
Background:

I have a CGI script (served by Apache on Linux) on my website that gets called about once per second or more. The script calls (via Perl's LWP) a web service on an external site (see http://gisdata.usgs.gov), parses the results, and returns a subset of them.

Problem:

It usually returns pretty quickly (<200ms), but sometimes 1+ seconds is lost just resolving the IP of the third-party web service's domain.
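One way to confirm that the slow phase really is the DNS lookup is curl's per-phase timing variables (curl is an assumption here, not something the question's script uses; the URL is the host from the question):

```shell
# %{time_namelookup} is the DNS portion of the request; %{time_total} is
# the whole thing. If the occasional slow request shows a large
# time_namelookup, the delay is in name resolution, not in the service.
curl -o /dev/null -s \
    -w 'dns: %{time_namelookup}s  total: %{time_total}s\n' \
    http://gisdata.usgs.gov/
```

Run it a few dozen times in a loop and the intermittent 1+ second lookups should show up in the `dns:` column.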

Solutions?

I could hard-code the IP, but that's error-prone; there has to be a better way. Plus they're using a load balancer, so I don't want to target a single IP if they expect the traffic to be balanced.

What I want to avoid is a round-trip to my ISP's name server on *each and every* request just to resolve the same domain.

I think I could run BIND on the server and use that to resolve names, but is there a simpler way? Is there an app I can run in the background that will just cache commonly-looked-up DNS names?
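Short of running BIND, the "keep track of commonly-looked-up names" idea can be sketched as a tiny file-based cache in shell (the function name, cache directory, and TTL are made up for illustration; `getent`, `awk`, and GNU `stat` are assumed available, as on a typical Linux box):

```shell
# Minimal DNS-cache sketch: answer from a cache file while it is fresh,
# and fall back to the system resolver only when it has gone stale.
CACHE_DIR="${CACHE_DIR:-/tmp/dnscache}"   # hypothetical location

resolve_cached() {
    name=$1
    ttl=${2:-300}                         # refresh interval in seconds
    mkdir -p "$CACHE_DIR"
    f="$CACHE_DIR/$name"
    now=$(date +%s)
    if [ -f "$f" ]; then
        age=$(( now - $(stat -c %Y "$f") ))
        if [ "$age" -lt "$ttl" ]; then
            cat "$f"                      # cache hit: no resolver traffic
            return 0
        fi
    fi
    # Cache miss or stale entry: ask the system resolver once, store it.
    ip=$(getent hosts "$name" | awk '{print $1; exit}')
    [ -n "$ip" ] || return 1
    printf '%s\n' "$ip" > "$f"
    cat "$f"
}
```

Every request still "resolves" the name, but only one resolver round-trip happens per TTL window; a load-balanced service keeps working because the answer is refreshed regularly rather than pinned forever. A local caching daemon (nscd or dnsmasq) does the same job more robustly.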

2007-02-01 06:39:09 · 1 answers · asked by fixedinseattle 4 in Computers & Internet Programming & Design

1 answer

It's not Perl and it's not Apache... DNS caching is tricky. I know it can be managed, but I don't know enough to shoot from the hip.

But you can simply add the .gov site you're concerned with to your /etc/hosts file and bypass DNS entirely... of course, if our government moves the server, you will have to update that file. But since you know Perl, you can do a lookup every hour or day and fix the hosts file!
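That "fix the hosts file every hour or day" cron job can be sketched in shell rather than Perl (the function name is made up; `getent` and `awk` are assumed available; point `HOSTS_FILE` at a copy before trusting it with the real /etc/hosts):

```shell
# Refresh one entry in a hosts-style file from the system resolver.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"

update_hosts_entry() {
    name=$1
    ip=$(getent hosts "$name" | awk '{print $1; exit}')
    [ -n "$ip" ] || return 1      # resolver failed: leave the file alone
    # Drop any stale line for this name, then append the fresh one.
    tmp=$(mktemp)
    grep -v "[[:space:]]$name\$" "$HOSTS_FILE" > "$tmp" || true
    printf '%s\t%s\n' "$ip" "$name" >> "$tmp"
    cat "$tmp" > "$HOSTS_FILE" && rm -f "$tmp"
}
```

Run from cron (e.g. hourly) with the third-party hostname as the argument, this keeps the hosts entry tracking the service's current address, so the one failure mode of a static hosts entry (the remote IP changing) is papered over.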


The time difference you're seeing is a full lookup vs. using the cached address.

Depending on the kind of load balancing they are doing, they may have just one outside address and route from it to various inside addresses.

Also, you could run your own DNS... HA HA!

2007-02-01 07:25:57 · answer #1 · answered by jake cigar™ is retired 7 · 0 0
