Geoff Hutchison (email@example.com)
Sun, 28 Mar 1999 23:39:36 -0500 (EST)
On Sun, 28 Mar 1999, Hans-Peter Nilsson wrote:
> There was a bug in the parsing of URLs, before calling
> Retriever::got_href. I believe that URL::parse should reset
> the contents (the member variables) before extracting the
> different parts.
I contributed the beginnings of my URL.cc overhaul, but as you noticed,
there's more to be done. Resetting the member variables should happen
there in URL::parse, IMHO.
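To illustrate the reset idea in miniature (this is a hedged sketch, not
ht://Dig's actual URL class -- the class name, members, and parsing here
are all hypothetical): clearing every member at the top of parse() means
a reused object can't carry stale parts over from a previous URL.

```cpp
#include <cassert>
#include <string>

// Hypothetical mini-URL: parse() clears every member before extracting
// parts, so parsing a second URL into the same object cannot inherit
// leftovers (e.g. a host) from the first.
class MiniURL {
public:
    std::string service, host, path;

    void parse(const std::string &ref) {
        // Reset first: a ref with no path must not keep the old path.
        service.clear();
        host.clear();
        path.clear();

        std::string rest = ref;
        std::string::size_type p = rest.find("://");
        if (p != std::string::npos) {
            service = rest.substr(0, p);
            rest = rest.substr(p + 3);
        }
        p = rest.find('/');
        if (p != std::string::npos) {
            host = rest.substr(0, p);
            path = rest.substr(p);
        } else {
            host = rest;
            path = "/";
        }
    }
};
```

Without the three clear() calls, parsing "ftp://other.org" after
"http://example.com/a.html" would leave path as "/a.html".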
> obviously buggy and/or incomplete since that can only work for
> some cases if URL::parse was called from URL::URL(char *ref, URL
> &parent), where the URL gets "reconstructed" the same way that
> URL::parse would do later. In no case was the URL "normal".
Yes, I noticed this in my testing, but did not have a chance to follow up
on it before I left.
> I believe URL::parse and URL::URL(char *ref, URL &parent) should
> be unified; setting defaults and calling a common parse method
> would clean things up some.
This was the direction I was starting to move. Since I wasn't getting
there fast, I decided to commit what I had and come back. The older code
generally *works*, but it could be cleaner. The hard question is what
should happen when parsing fails: should we try to guess what a
malformed URL was meant to be?
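The unification could look roughly like this (a sketch only, under the
assumption of a hypothetical TinyURL class -- none of these names come
from the real URL.cc): both constructors set defaults, then funnel
through one shared private parser, so relative references are resolved
in exactly one place.

```cpp
#include <cassert>
#include <string>

// Sketch: the (ref, parent) constructor inherits defaults from the
// parent, then the common parseRef() overrides whatever the reference
// actually specifies. One parser, two entry points.
class TinyURL {
public:
    std::string service, host, path;

    TinyURL() {}

    // Absolute URL.
    explicit TinyURL(const std::string &ref) { parseRef(ref); }

    // Relative reference: defaults come from the parent.
    TinyURL(const std::string &ref, const TinyURL &parent) {
        service = parent.service;
        host = parent.host;
        path = parent.path;
        parseRef(ref);
    }

private:
    void parseRef(const std::string &ref) {
        std::string::size_type p = ref.find("://");
        if (p != std::string::npos) {
            // Absolute: replaces everything, including defaults.
            service = ref.substr(0, p);
            std::string rest = ref.substr(p + 3);
            p = rest.find('/');
            host = (p == std::string::npos) ? rest : rest.substr(0, p);
            path = (p == std::string::npos) ? "/" : rest.substr(p);
        } else if (!ref.empty() && ref[0] == '/') {
            path = ref;                      // absolute path, same host
        } else {
            // Relative path: resolve against the parent's directory.
            std::string::size_type slash = path.rfind('/');
            path = path.substr(0, slash + 1) + ref;
        }
    }
};
```

The error-handling question remains open either way; the point is only
that one parser means one place to answer it.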
> part of the URL class; an URL does not intuitively have a
> "hopcount" attribute IMHO.
No, but we need to keep it somewhere... <sigh>
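One hedged possibility for "somewhere" (illustrative names only, not a
proposal from the real code): pair the hopcount with the URL in the
retriever's work-queue entry, so crawl depth lives with crawl state
rather than as a URL attribute.

```cpp
#include <cassert>
#include <queue>
#include <string>

// Hypothetical queue entry: the URL string plus its distance (in hops)
// from the start URL. The URL class itself stays hopcount-free.
struct QueueEntry {
    std::string url;
    int hopcount;   // crawl state, not URL state
};

// Enqueue a child one hop deeper than its parent, respecting a limit.
bool enqueueChild(std::queue<QueueEntry> &q, const QueueEntry &parent,
                  const std::string &childUrl, int maxHops) {
    if (parent.hopcount + 1 > maxHops)
        return false;              // too deep; drop it
    QueueEntry e;
    e.url = childUrl;
    e.hopcount = parent.hopcount + 1;
    q.push(e);
    return true;
}
```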
> My changes may also have uncovered other bugs related to
> handling of URLs, but now people (hopefully) have a better clue
> if/when that happens.
I'll take a look tomorrow. I'm building up a small regression suite of
URLs, both parent and child, including some incorrect ones.
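A table-driven shape for such a suite might look like the sketch below
(everything here is hypothetical -- the case table, the Resolver
signature, and the toy resolver are stand-ins, not the real harness):
each case is a parent/ref pair with an expected normalized result, and
an empty expected string marks a URL that should be rejected.

```cpp
#include <cassert>
#include <string>
#include <vector>

// One regression case: base URL, href as found, expected result
// ("" means the ref should be rejected).
struct UrlCase {
    std::string parent;
    std::string ref;
    std::string expected;
};

typedef std::string (*Resolver)(const std::string &, const std::string &);

// Run every case against a resolver; return the number of failures.
int runCases(const std::vector<UrlCase> &cases, Resolver resolve) {
    int failures = 0;
    for (std::vector<UrlCase>::size_type i = 0; i < cases.size(); ++i)
        if (resolve(cases[i].parent, cases[i].ref) != cases[i].expected)
            ++failures;
    return failures;
}

// Trivial stand-in resolver: absolute refs pass through, relative refs
// resolve against the parent's directory, empty refs are rejected.
std::string toyResolve(const std::string &parent, const std::string &ref) {
    if (ref.empty())
        return "";
    if (ref.find("://") != std::string::npos)
        return ref;
    return parent.substr(0, parent.rfind('/') + 1) + ref;
}
```

Keeping the cases as data makes it cheap to add each newly reported bad
URL as a permanent regression case.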
This archive was generated by hypermail 2.0b3 on Sun Mar 28 1999 - 20:54:48 PST