The reason I want to connect 2 Linux boxes together via InfiniBand cards is to "superscale" the processors: I want my main Linux box to offload 'latent' processor queues to the second. If possible (and before I buy the InfiniBand PCI-E cards), I would set it up as follows: PC #1: AMD X2, Mandriva Linux, the main PC; when its work queues are too long, it should offload to PC #2: AMD Socket 754, a minimalist Linux with a kernel compiled specifically for the task. Both have available PCI-E slots. This "p2p network" is only for superscalar-style experiments with the Linux kernel. I am not very knowledgeable about hardware and networking, so I need help.
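To make the idea concrete: once IPoIB gives each card an ordinary IP interface, the offload can be prototyped in user space with plain sockets before touching the kernel. Here is a rough sketch of the PC #1 side; the 192.168.10.2 address, port 5000, and the sum_range job name are made-up placeholders for the back-to-back link, not anything from a real package.

```python
# Hypothetical offload client for PC #1. Assumes PC #2 is reachable at
# 192.168.10.2 over the direct link and runs the worker sketched below.
import pickle
import socket
import struct

WORKER_ADDR = ("192.168.10.2", 5000)  # assumed address/port of PC #2

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("worker closed the link")
        buf += chunk
    return buf

def offload(job_name, args):
    """Ship one job to PC #2 and block until the result comes back."""
    payload = pickle.dumps((job_name, args))
    with socket.create_connection(WORKER_ADDR) as sock:
        sock.sendall(struct.pack("!I", len(payload)) + payload)
        (size,) = struct.unpack("!I", recv_exact(sock, 4))
        return pickle.loads(recv_exact(sock, size))

if __name__ == "__main__":
    # Example: push a CPU-heavy sum onto the second box.
    print(offload("sum_range", (0, 10_000_000)))
```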
Another use (if I can even do it) would be a "processor share" PC on a direct link with two other PCs (via 2 PCI-E slots). This PC would do nothing but share out its CPU cycles for a demanding task I have. With another PC sitting around, why not cluster and get the most out of what I am doing? I want to cluster 2 PCs without a switch.
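And the matching sketch for the box that donates its cycles. Binding to 0.0.0.0 is one way to answer on both PCI-E links at once if two cards go in; the whitelist of jobs is again just an illustration, not a real package's API.

```python
# Hypothetical worker for the "processor share" PC: one job per
# connection, executed locally, pickled result sent back.
import pickle
import socket
import struct

def sum_range(lo, hi):
    return sum(range(lo, hi))

JOBS = {"sum_range": sum_range}  # whitelist of tasks this box will run

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("client closed the link")
        buf += chunk
    return buf

def serve(port=5000):
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))  # listen on every interface/link
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                (size,) = struct.unpack("!I", recv_exact(conn, 4))
                name, args = pickle.loads(recv_exact(conn, size))
                result = pickle.dumps(JOBS[name](*args))
                conn.sendall(struct.pack("!I", len(result)) + result)

if __name__ == "__main__":
    serve()
```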
2007-06-02 09:21:45 · 2 answers · asked by jarrod d in Computers & Internet ➔ Computer Networking
The Linux kernel and a few modules IS the software, but thanks for your remark. Purchasing new hardware is what we're trying to avoid here, for everyone. As for Linux-HA, it answers the cluster question. Thank you especially for taking this seriously.
I've found a forum that discusses a module for a 2.6.x kernel that submits one process at a time from the queue to a remote module (after a REQ/ACK/SYN exchange); the remote module finishes the queue, then sends it back to the host, all via InfiniBand. I'll point serious inquirers there. deskij0822@clarkstate.oh.us.cc
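I haven't seen that module's source, so this is only my guess at the shape of the exchange it describes: a REQ, an ACK, then exactly one queued process shipped over the link. The literal REQ/ACK strings and the loopback demo are assumptions for illustration, not the module's actual wire format.

```python
# Sketch of the per-job handshake described in the thread; message
# names are guesses. A loopback pair stands in for the InfiniBand link.
import socket
import threading

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the link")
        buf += chunk
    return buf

def submit_with_handshake(sock, job_bytes):
    sock.sendall(b"REQ")                  # ask the remote module for a slot
    if recv_exact(sock, 3) != b"ACK":     # remote must acknowledge first
        raise RuntimeError("remote refused the job")
    sock.sendall(job_bytes)               # ship exactly one queued process

if __name__ == "__main__":
    local, remote = socket.socketpair()   # stand-in for the IB link

    def remote_side():
        if recv_exact(remote, 3) == b"REQ":
            remote.sendall(b"ACK")
            print("remote got job:", remote.recv(64))

    t = threading.Thread(target=remote_side)
    t.start()
    submit_with_handshake(local, b"pid 4242: queued work unit")
    t.join()
```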
2007-06-02 18:01:15 · update #1