Subject: [htdig3-dev] Re: More info on htdig hang (PR#960)
From: Robert La Ferla (firstname.lastname@example.org)
Date: Sat Dec 02 2000 - 10:35:26 PST
Here are my thoughts for your consideration:
Does the cookie warehouse have to be persistent? That is, couldn't the cookies be stored in
memory for the duration of the dig process? Perhaps keep the hostnames as keys into a hash
table whose values are themselves hash tables, where each key is the cookie id and each value
is a data structure (or cookie class instance) holding the details of the cookie. As an
enhancement, it may also be a good idea to have a configurable option that stores the cookies
to a file (in the Netscape cookie format, http://www.netscape.com/newsref/std/cookie_spec.html)
when htdig finishes and loads them back when it starts.
Gabriele Bartolini wrote:
> > Making a HEAD call before the GET
> >Try to get through to host www.sandiegozoo.org (port 80) via HTTP
> > 6 - Connection already open. No need to re-open.
> > Connecting via TCP to (www.sandiegozoo.org:80)
> >Taking advantage of persistent connections
> >Host: www.sandiegozoo.org
> >User-Agent: htdig
> HA !!! I got it ... I understand the problem. It's because we don't support
> cookies, and so PHP sessions cannot work. So, every time it gets that URL, a
> new session ID is assigned (poor server, poor baby). I think it's not
> connected to persistent connections, but rather to cookie management, which
> ht://Dig lacks.
> I think that cookie passing is pretty easy to implement in the HtHTTP class,
> but what about the 'data warehouse' of cookies in the db, in order to track
> all of them and send them correctly back?
> Any ideas?
> P.S.: Thanks Robert for your help.
> Gabriele Bartolini, Web Programmer
> email@example.com | http://space.tin.it/io/gabrbart
> "US Navy uses NT. C'mon Saddam, let's go party!"
This archive was generated by hypermail 2b28 : Mon Dec 04 2000 - 05:43:00 PST