Subject: Re: [htdig] A little automation
Date: Thu Jan 04 2001 - 12:07:18 PST
At least IMO, true operational requirements for any such system would be
quite user-specific. The (full) set of user requirements would tend to
include:
- Varying frequencies, perhaps even within the same URL.
- Inclusion and/or exclusion of specified nodes within a URL.
- Varying underlying-database formats.
If this were implemented AT ALL, I'd see it in the form of an independent
sub-system, with output taking the form of .conf files, which would then be
run through an htdig/htmerge script. The format of the underlying database
would be "up to the user"; perhaps accessed through something similar to
Perl/DBI.
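To make the idea concrete, here's a minimal sketch of such a sub-system in Python (rather than Perl/DBI, purely for illustration). It assumes a hypothetical `sites` table with `url` and `active` columns in a SQLite file, and emits an htdig .conf fragment using the `start_url` and `exclude_urls` attributes; table name, column names, and backend are all assumptions, not anything htdig itself defines.

```python
import sqlite3

def generate_conf(db_path: str) -> str:
    """Read the site list from a (hypothetical) 'sites' table and
    emit htdig configuration lines for it."""
    conn = sqlite3.connect(db_path)
    try:
        # Active rows become start_url entries; inactive rows are excluded.
        starts = [r[0] for r in conn.execute(
            "SELECT url FROM sites WHERE active = 1 ORDER BY url")]
        excludes = [r[0] for r in conn.execute(
            "SELECT url FROM sites WHERE active = 0 ORDER BY url")]
    finally:
        conn.close()
    lines = ["start_url: " + " ".join(starts)]
    if excludes:
        lines.append("exclude_urls: " + " ".join(excludes))
    return "\n".join(lines) + "\n"
```

The output would then be appended to (or included from) the .conf file handed to the usual htdig/htmerge script, keeping the database format entirely the user's choice.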
In a message dated 1/4/01 11:43:05 AM US Mountain Standard Time,
<< When using a very large list of URLs to index, it can get pretty tough to
keep track of which sites to index or not. In other words, a site that I
remove from the list could end up back in by error and be indexed again.
I can think of a simple way to take care of that: some sort of database to
maintain that list. But would it be possible to add some smarts to htdig to
take care of that? >>
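The database-backed list suggested above could guard against exactly that error by flagging removed sites as inactive rather than deleting their rows. A minimal sketch, again in Python with SQLite as an assumed backend; the schema and the "re-adding never reactivates" rule are my assumptions, not anything htdig provides:

```python
import sqlite3

def open_list(db_path: str) -> sqlite3.Connection:
    """Open (or create) the hypothetical site-list database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS sites "
                 "(url TEXT PRIMARY KEY, active INTEGER DEFAULT 1)")
    return conn

def add_site(conn: sqlite3.Connection, url: str) -> None:
    # INSERT OR IGNORE: re-adding a URL that was deliberately removed
    # does NOT reactivate it, so it can't slip back in by error.
    conn.execute("INSERT OR IGNORE INTO sites (url) VALUES (?)", (url,))
    conn.commit()

def remove_site(conn: sqlite3.Connection, url: str) -> None:
    # Keep the row, just mark it inactive.
    conn.execute("UPDATE sites SET active = 0 WHERE url = ?", (url,))
    conn.commit()

def active_sites(conn: sqlite3.Connection) -> list:
    return [r[0] for r in conn.execute(
        "SELECT url FROM sites WHERE active = 1 ORDER BY url")]
```

Only the `active_sites()` result would ever be fed to the indexer, so a site removed once stays out until someone explicitly reactivates it.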
List archives: <http://www.htdig.org/mail/menu.html>
This archive was generated by hypermail 2b28 : Thu Jan 04 2001 - 12:20:16 PST