= Hardware Wishlist =
== snapshot.debian.org ==

Currently snapshot.debian.net is operated by a single individual on their
hardware at home. It is a service archiving old binary and source packages.
Access to old packages, which have in the meantime been deleted from the
regular Debian archive, allows developers and users to debug upgrade problems,
to check when regressions were introduced, to check whether old packages were
miscompiled, to downgrade to older versions while bugs are being fixed, etc.
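The downgrade use case would presumably work by pointing apt at a dated snapshot and requesting an exact version; the sources.list line below uses a purely hypothetical URL layout, and the package name and version are placeholders:

```
# hypothetical /etc/apt/sources.list entry for a dated snapshot
# (the URL layout is illustrative, not the service's actual scheme)
deb http://snapshot.debian.net/archive/2008/01/01/debian/ unstable main

# after apt-get update, an exact archived version can be requested:
#   apt-get install somepackage=1.2-3
```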

Requirements:

* Two systems with sufficient storage, hosted somewhere outside the US (so we
 can import old non-US into them).
* Storage should be at least on the order of 8T (currently snapshot.d.n is
 using about 4T), and easily expandable.
* Remote management facilities.

Proposers:

* Joerg Jaspert/ftpteam
* Peter Palfrader


== data.d.o ==
A service by ftpmaster to host larger arch-all packages for datasets like
== merge.d.o ==

I'd like to use Ubuntu's merge-o-matic to generate diffs between Debian's
source archive and the source archives of various other Debian-based distros
(Knoppix, Freespire, Mepis, Sidux, gNewSense and so on). The result would be
much like patches.ubuntu.com. merge-o-matic downloads source packages, unpacks
them, and generates diffs against unpacked pure Debian source packages. As a
result, lots of disk space would be required, since the whole Debian archive,
an unpacked copy of it, and the same again for each derivative distribution
are needed.
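A rough sketch of the kind of output involved, using plain `diff` on two unpacked trees (directory names and file contents here are made up for illustration; merge-o-matic's real layout differs):

```shell
# Toy stand-ins for an unpacked Debian source package and the same
# package as shipped by a derivative (names/contents are illustrative).
rm -rf mom-demo
mkdir -p mom-demo/debian/hello-1.0 mom-demo/derivative/hello-1.0
echo 'vendor = Debian' > mom-demo/debian/hello-1.0/settings
echo 'vendor = Derivative' > mom-demo/derivative/hello-1.0/settings

# -N: absent files count as empty, -r: recurse, -u: unified diff.
# diff exits 1 when the trees differ, so mask that for scripting.
diff -Nru mom-demo/debian/hello-1.0 mom-demo/derivative/hello-1.0 \
  > mom-demo/hello_1.0.patch || true
```

Done for every package in every derivative, this needs both a packed and an unpacked copy of each archive on disk at once, which is what drives the disk-space requirement below.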

Requirements:

* a system with sufficient disk space (how much is that?)

Proposers:

* Paul Wise


== source.debian.org ==
Idea: a machine which has all sources extracted from orig.tar.gz, with the diff applied, for all dists.

Proposers:

* Noel Koethe
== new backup.d.o ==

bartok, the current backup.d.o, is getting old too, and there is not all that
much spare space.

We should be looking at replacing that eventually. A raid6 of big SATA disks
would probably provide the required robustness and be cost-effective.


= Historical =
These proposals probably are no longer relevant:
* two systems with fast disks (we don't need that much storage, some 200 gigs
 should suffice easily for a while - say 4x140 gig raid10), some ram for
 caching (say 16g?), and the CPU to handle the scripts (if we can get two quad
 cores per box that would be great).
* one master that processes incoming email, changing bugs as required, and
 pushes the changes to the web facing servers.
Proposers:
domain at ubc/ece once the blade there is fixed. And we can probably move
incoming MX to a blade instance in Darmstadt and another one at the same
place as the new bugs-master will be, if the bugs folks still want that.

== new ftp-master ==

ftp-master's hardware is becoming old and its warranty is running out (we keep
extending it, but that's not free either).

We probably should look into getting a new machine somewhere in the US. Apart
from recent CPUs, a reasonable amount of ram (16-32g?), and the usual
management fu, the primary requirement is reliable storage. We should probably
look at some raid6, either internal or external, for the master copy of the
archive, in the range of 2-4T (is that right, ftp folks?). Additionally, some
faster internal storage could be useful for the database part of the ftp
archive (4 disks in raid10?).

Status (2010-05-20):

HP DL380 G6 machine in place at CS Dept., Brown University. In setup.