Re: [htdig] Post-processing removal of dups?


Gilles Detillieux (grdetil@scrc.umanitoba.ca)
Fri, 18 Jun 1999 13:39:54 -0500 (CDT)


According to Aaron Turner:
> On a similar note, I'm having a major dilemma. Basically I have a SQL DB
> with content that is accessed via PHP. Each "article" in the DB has a URL
> like:
>
> /articles/article.php3?id=x&loc=a.b.c.d
>
> where x, a, b, c, and d are positive integers. Basically the id is a unique
> identifier for the article, and loc is the location in the 'tree'. Each
> article can be in 1 or more places in the tree. So:
>
> /articles/article.php3?id=11&loc=1.3.4.10
> /articles/article.php3?id=11&loc=1.3.5.7

Here are a couple more ideas. If you can produce a list of locations that
you want excluded from searches, you can add them to the list in the
exclude_urls attribute, or add them as Disallow records in robots.txt.
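
For instance, a rough sketch of both (the id/loc values here are just made
up to match your example above; exclude_urls rejects any URL that contains
one of the listed substrings, and I haven't verified how far into the query
string ht://Dig's robots.txt prefix matching goes, so test it first):

    # htdig.conf -- exclude the non-primary copy of article 11
    exclude_urls: id=11&loc=1.3.5.7

    # robots.txt
    User-agent: htdig
    Disallow: /articles/article.php3?id=11&loc=1.3.5.7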

Alternatively, you could change the article.php3 script to add a noindex
META tag to its output for any article that's not at its "primary" location,
i.e., the one where you want it to show up in search results.
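
A rough sketch of that, assuming PHP3-style global GET variables ($id and
$loc) and a get_primary_location() helper you'd have to write against your
own schema:

    <?php
        /* Hypothetical helper: look up this article's primary location
           in the SQL DB. */
        $primary_loc = get_primary_location($id);
    ?>
    <html>
    <head>
    <?php
        if ($loc != $primary_loc) {
            /* Not the primary copy -- tell ht://Dig (and other robots)
               not to index this page. */
            echo "<meta name=\"robots\" content=\"noindex\">\n";
        }
    ?>
    ...rest of the page as before...

Since ht://Dig honours the robots META tag, only the primary copy of each
article should end up in the index.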

-- 
Gilles R. Detillieux              E-mail: <grdetil@scrc.umanitoba.ca>
Spinal Cord Research Centre       WWW:    http://www.scrc.umanitoba.ca/~grdetil
Dept. Physiology, U. of Manitoba  Phone:  (204)789-3766
Winnipeg, MB  R3E 3J7  (Canada)   Fax:    (204)789-3930
------------------------------------
To unsubscribe from the htdig mailing list, send a message to
htdig@htdig.org containing the single word "unsubscribe" in
the SUBJECT of the message.



This archive was generated by hypermail 2.0b3 on Fri Jun 18 1999 - 10:59:47 PDT