Matt Armstrong (email@example.com)
26 Jan 1998 19:30:35 -0800
Andrew Scherpbier <firstname.lastname@example.org> writes:
> Matt Armstrong wrote: This is actually very similar to the situation
> I had to deal with when I was developing ht://Dig for San Diego
> State University. This also goes completely against the idea of
> virtual hosts (either "soft" or "hard"). Since most people wanted
> to support virtual hosts, I made that the default for 3.0.8b2, but
> you can turn it off and get what you want. Set
> 'allow_virtual_hosts' to 'false' and you should be good to go.
> (Note that this is *not* documented... Sorry)
I have 3.0.8b1 (not b2). I'm also not sure what virtual hosts are. I
can not find allow_virtual_hosts anywhere in the source code, so I
assume this feature isn't in b1?
> The way this option works is that the first time a host is seen, a
> forward and then inverse lookup is performed. The result of this
> inverse lookup will then be used thereafter. (It uses the first
> one, in case there are multiple names for an IP) I understand that
> this may not always result in the correct URLs, but it does
> eliminate all duplicates like you mentioned.
Sounds promising. This would most likely use the fully qualified
domain name. Will the limit_urls_to and exclude_urls config vars use
these longer versions?
> You could possibly tweak the results of the inverse lookup by
> setting your nsswitch.conf to look at files first so that you can
> enter the names you want to show up in the URLs into /etc/hosts.
> This may be a performance hit, though.
Is nsswitch.conf an ht:Dig thing or a Unix thing? (again, grep for
nsswitch in the source brings up nothing)
This archive was generated by hypermail 2.0b3 on Sat Jan 02 1999 - 16:25:33 PST