Understanding how Linux reacts to memory pressure using vmstat
Introduction

This article is a quick, hands-on example of how a typical Linux system reacts when memory pressure builds up quickly. I'll use vmstat to watch the free, buffers and cached counters and see how they respond to a sharp increase in memory usage. For reference, my test box has 4 GB of memory.

To drive memory usage up quickly, I wrote a simple C program that allocates 3 GB of RAM to an array of integers, waits 5 seconds so we have time to look at memory usage, and then frees the memory. (A sketch of the program appears at the end of this article.) I then compiled the code using the gcc command.

On the left terminal I ran watch -n 1 vmstat, and on the right terminal I ran my C program, ./mem. I kept recording for a couple of seconds after the program finished, i.e. after the memory was freed, so we can see how the counters recover from memory pressure. I encourage you to go through the animation twice: once observing free, and a second time observing buff/cache.

free

First, let's focus on free. free shows the amount of memory that is 100% physically free/idle. Notice that it starts out at about 2.7 GB (2759796 KB) and drops sharply to just 107 MB (107876 KB) in 8 seconds. As soon as the memory is freed, it rises right back to about 3 GB (3039684 KB).

buff/cache

On a healthy Linux system with plenty of RAM, buff/cache is kept high to make the most of the available memory. Cache is the size of the page cache, and buffers is the size of the in-memory block I/O buffers. Since kernel 2.4.10, the page and buffer caches are unified, so we look at them together here. This is also why the top command shows them as a single figure.

To Linux, idle RAM is a waste when it could be used to cache frequently read data or to buffer data for faster retrieval. For this reason Linux uses a lot of buff/cache, so in reality the total amount of free memory on Linux is the sum of free + buff/cache. In this particular case, the fact that buff/cache shrinks significantly indicates that Linux no longer has the luxury of using it for other purposes and is now prioritising our ./mem program. As soon as the memory is freed, buff/cache immediately starts building up again, and 6 seconds later the values are much higher.

If memory had remained under pressure, Linux could have resorted to swap space, and as pages were swapped in and out of disk we would see the si/so columns increasing.

Appendix - OOM Killer

Since we're talking about memory, it's worth mentioning that when memory is critically low, or low enough to threaten the stability of the system, Linux can trigger its Out of Memory (OOM) Killer. Its job is to kill processes until enough memory is freed for the system to function smoothly. The OOM Killer selects the processes it considers least important to the system's functionality. The kernel maintains an oom_score for each process, which can be found in /proc. The lower the score, the less likely the OOM Killer is to kill that process, typically because it is using little memory; the higher the oom_score, the higher the chances of the process being killed in an OOM situation. Because most processes on my Linux box are barely using memory, most of them get a score of 0 (143 out of 147 processes on my system). To list every score on the system, I read each /proc/<pid>/oom_score and piped the output through sort -u to sort the entries and remove duplicates.
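As promised above, here is a minimal sketch of the ./mem program. The 3 GB size and the 5-second pause come straight from the test described in the article; the memset is my assumption, because without actually touching the pages the kernel would only reserve virtual address space and the free counter in vmstat would barely move:

```c
/* mem.c - allocate ~3 GB, hold it for 5 seconds, then free it.
 * A minimal sketch of the program described above, not the
 * original listing. Build with: gcc -o mem mem.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define SIZE (3UL * 1024 * 1024 * 1024)   /* 3 GB in bytes */

int main(void)
{
    int *array = malloc(SIZE);
    if (array == NULL) {
        perror("malloc");
        return 1;
    }

    /* Touch every page: Linux overcommits, so an untouched malloc()
     * consumes almost no physical memory. */
    memset(array, 1, SIZE);

    printf("3 GB allocated, sleeping 5 seconds...\n");
    sleep(5);

    free(array);
    printf("memory freed\n");
    return 0;
}
```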
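And since we ended on the OOM Killer: the per-process score files under /proc are plain text, so reading one is just an open-and-scan. A tiny sketch (the file name is real; everything else is illustrative):

```c
/* oomscore.c - print this process's oom_score from /proc.
 * The same read works for any PID you can access,
 * e.g. /proc/1234/oom_score.
 */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/oom_score", "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    int score;
    if (fscanf(f, "%d", &score) == 1)
        printf("oom_score: %d\n", score);

    fclose(f);
    return 0;
}
```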
What really happens under the hood when we type 'ls' on Linux?

Quick Intro

That's a question someone asked me a while ago, and while I had a pretty good idea of the exec(), read() and write() system calls, I decided to investigate further and publish an article. In this article, I'm going through what happens when we type ls and hit Enter. I'll use the strace debugging tool to capture the system calls that a simple command like this triggers. For reference, the process ID (PID) of my bash shell is 4716. Also, if you don't know what system calls or file descriptors are, please refer to the Appendix 1 and Appendix 2 sections at the end.

It may seem silly, but this is the kind of knowledge that actually made me better at troubleshooting and contributed tremendously to my understanding of the Linux OS, because this pattern repeats over and over. And as you may know, BIG-IP runs on top of Linux.

The strace command

First off, I'm using the strace command, which intercepts and prints the system calls called by a process¹ and the signals² received by a process. If I didn't add the redirection 2>&1, the egrep command wouldn't work, because egrep filters file descriptor (FD) 1 (stdout) while strace writes to FD 2 (stderr). Note that I'm attaching strace to my bash shell's process ID (4716). For this reason, I added the -f option to capture the shell's behaviour of creating a new child sub-shell process in order to execute ls.

Why a child process? If Linux executed ls directly, the ls program would take over the current process's (the bash shell's) resources, and we would not be able to get back to our shell once ls finished, because the shell would have been overwritten. Instead, bash creates an exact copy of itself by calling the clone() system call and then executes ls, so that ls's resources are written to this new process rather than to the parent process. In fact, this new cloned process becomes ls. Interesting, eh?

¹ A process is a running instance of a program we execute.
² Signals are software interrupts that one process can send to another process or group of processes. A well-known example is the kill signal, sent when a process hangs and we want to force the program to terminate.

Strace output

I can't show the raw output here because I've filtered it, but below is what I'm going to explain. In fact, let's work on the output without the shared libraries.

Typing "ls" and hitting Enter

By default, the Linux prompt writes to FD 2 (standard error), which prints to the terminal just like standard output. When I hit the letter l on my keyboard, Linux reads it from the keyboard and writes it back to the terminal. Both the read() and write() system calls receive:

- the file descriptor they're reading from or writing to as the first argument
- a buffer holding the character as the second argument
- the size in bytes of the character as the third argument

So read() reads from FD 0 (standard input, our keyboard) and write() writes to FD 2 (standard error), which ends up printing the letter "l" in our terminal. The return value is what appears after the equals sign, and for both read() and write() it's the number of bytes read or written. If there had been an error, the return value would be -1.

Bash shell creates a new process of itself

The clone() system call is used instead of fork() because fork() doesn't allow the child process to share parts of its execution context with the calling process, i.e. the one that calls fork().
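To make the clone-then-exec dance concrete, here's a minimal sketch of how a shell runs a command. I'm using fork() here rather than calling clone() directly; on modern Linux, glibc's fork() is itself implemented on top of clone(), so treat this as the classic textbook pattern, not bash's actual code:

```c
/* runls.c - the classic way a shell runs an external command:
 * duplicate yourself, exec the new program in the child, and
 * wait for it in the parent. Sketch only, not bash's code.
 */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();            /* glibc implements this via clone() */

    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {
        /* Child: replace this process image with ls. On success
         * execl() never returns; we have *become* ls. */
        execl("/bin/ls", "ls", (char *)NULL);
        perror("execl");           /* only reached if exec failed */
        return 127;
    }

    /* Parent (the "shell"): wait for the child. The SIGCHLD the
     * child sends at exit is what wakes the shell up. */
    int status;
    waitpid(pid, &status, 0);
    printf("ls exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```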
Modern Linux now mostly uses clone(), because certain resources (such as the file descriptor table, the virtual memory address space, etc.) are perfectly fine to share between parent and child, so clone() ends up being more efficient for most cases. So here, my Debian 10.x uses the clone() system call.

Up to this point, the new process is almost an exact replica of our bash shell, except that it now has a stack memory address of its own, since stack memory cannot be shared¹. The flags argument declares what is shared between the parent process (bash) and the new process (the sub-shell that will shortly turn into our ls command). The CLONE_CHILD_CLEARTID flag is there to allow another function in the ls code to clean up its memory address; for this reason, we also have the memory address referenced in child_tidptr=0x7f3ce765ba10 (this is the actual memory address used by our ls command). CLONE_CHILD_SETTID stores the child's PID at the memory location referenced by child_tidptr. Lastly, SIGCHLD is the signal that the ls process will send to its parent process (the bash shell) once it terminates.

¹ A stack is the memory region allocated to a running program for statically allocated objects such as function frames and local variables. There's another region of memory, the heap, that stores dynamically allocated objects. Stack memory is fast and freed automatically; heap memory requires manual allocation using malloc() or calloc() and manual freeing using the free() function. For more details, please refer to this article here.

Execution of ls finally starts

I had to filter out other system calls to reduce the complexity of this article. Other things happen here too, like memory mappings (via the mmap() system call), retrieval of the process PID (via the getpid() system call), etc. Except for the last two lines (which literally read a blank character from the terminal and then close it), I'd ignore this bit, as it refers to file descriptors that were filtered out. The important line is the execve() one.

In reality, execve() doesn't return upon success, so I believe the 0 here is just strace signalling there was no error. What happens is that execve() replaces the current virtual address space (inherited from the parent process) with a new address space used independently by the ls program. We now finally have ls, as we know it, loaded into memory!

ls looks for content in current directory

The next step is for the ls command to list the contents of the directory we asked for. In this case we're listing the contents of the current directory, which is represented by a dot. The openat() system call creates a new file descriptor (number 3) holding the contents of the directory, and then closes it. The contents are then written to our terminal using the write() system call. Note that strace truncates the full directory listing, but it displays the correct number of bytes written (62 bytes).

If you're wondering why FD 3 is closed before ls writes its contents to FD 1 (stdout), keep in mind that strace output is not the actual ls code! It only shows the system calls, i.e. the moments when the code needs access to a privileged kernel operation. A snippet of ls.c from the Linux coreutils package shows that the ls code has a function called print_dir, and inside that function it uses the native C library function opendir() to store the contents of the directory in a variable called dirp. In reality, that's not the directory's contents themselves, but a pointer to them.
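Here's a stripped-down sketch of that pattern. This is not coreutils' actual print_dir, just the same read-everything, close, then print shape it is built around, which also explains the close-before-write ordering we saw in strace:

```c
/* listdir.c - the skeleton of what ls's print_dir does: open the
 * directory, slurp the entries into memory, close the directory,
 * and only then print. Sketch only, not coreutils code.
 */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void)
{
    char names[256][256];
    int  n = 0;

    DIR *dirp = opendir(".");      /* triggers openat(AT_FDCWD, ".", ...) -> FD 3 */
    if (dirp == NULL) {
        perror("opendir");
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dirp)) != NULL && n < 256) {
        strncpy(names[n], entry->d_name, sizeof names[n] - 1);
        names[n][sizeof names[n] - 1] = '\0';
        n++;
    }
    closedir(dirp);                /* close(3): FD 3 goes away here... */

    for (int i = 0; i < n; i++)    /* ...and only now do we write(1, ...) */
        printf("%s\n", names[i]);

    return 0;
}
```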
The openat() system call is triggered when the print_dir function executes opendir(). The bottom line is that strace only shows us what is going on from the point of view of system calls; it doesn't give us a complete picture of everything that happens inside the ls code. So, to answer our question: the opendir() function only uses the openat() system call to gain access to the contents of the current directory. The code can then copy the contents into a variable and close the file descriptor immediately.

Terminal prompt gets printed back to us

After the program closes, Linux prints our terminal prompt back to us.

Appendix 1 - What are System Calls?

The Linux OS is responsible for managing devices, processes, memory and the file system. It won't, or at least will try hard not to, let anything coming from user space disrupt the health of the system. Therefore, for the most part, tasks like allocating memory and reading from or writing to files use the kernel as an intermediary. Even printing a hello world in C triggers a write() system call to write "Hello World" to our terminal; you can see this by running the program under strace and filtering for write() calls (the sketch at the end of Appendix 2 demonstrates it). So think of system calls as Linux protecting your computer's resources from programs and end users like us, behind a safe API.

Appendix 2 - What are File Descriptors?

Every program starts with 3 standard file descriptors: 0 (standard input), 1 (standard output) and 2 (standard error). These file descriptors live in a table, called the file descriptor table, that tracks the open files of all our programs. When our "Hello World" was printed above, the write() system call "wrote" it to file descriptor 1 (standard output), and by default file descriptor 1 prints to the terminal. File descriptor 0, on the other hand, is used by the read() system call: it reads from standard input, i.e. whatever we type on the keyboard. Standard error (2) is reserved for errors. From FD 3 onwards, programs are free to use descriptors as they need. When we open a file, that file is assigned the lowest available file descriptor number: 3 for the first file, 4 for the second, and so on.
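A quick sketch tying both appendices together, using nothing beyond plain POSIX unistd.h: write() sends bytes straight to FD 1, and read() pulls whatever we type from FD 0:

```c
/* fds.c - the three standard file descriptors in action.
 * Compile it, then run: strace -e read,write ./fds
 * and you'll see the same read(0, ...) / write(1, ...) pattern
 * described above.
 */
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* FD 1 (stdout): this is the syscall behind printf("Hello World") */
    write(1, "Hello World\n", 12);

    /* FD 0 (stdin): blocks until we type something and hit Enter */
    ssize_t n = read(0, buf, sizeof buf);

    if (n > 0)
        write(2, buf, n);   /* FD 2 (stderr) also lands on the terminal */

    return 0;
}
```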
Building a Security Mindset

****Disclaimer: I'm not a psychologist, psychiatrist, brain matter expert or guru of all monkey manners. This post is my viewpoint from what I've experienced and learned... or tried to learn. If you have a different viewpoint, please post a reply, and let's have a rousing discussion about it!****

Humanity's reliance on interdependence has required us to trust. Life is easier, faster, and simpler to live if we extend some basic trust. Every day I get in my car and trust that the engineers who designed it didn't wire the spark plugs into the fuel tank. I trust that the other drivers on the road are somewhat trained not to kill me (though some days rush hour can really test that trust). I trust those who prepare my food, repair my car, protect my streets, and all other manner of things.

This trust is what makes us vulnerable. Every person I trust is another potential exploit path. People trusted Bernie Madoff with their life savings, only to have that trust turned against them. So, what can we do about it? Should I go live in a remote corner of the woods, make my own clothes from tree bark, and do everything myself? Tempting, I'm not going to lie, as I had that dream as a kid (http://en.wikipedia.org/wiki/My_Side_of_the_Mountain). But no, I will continue my trusting existence. I will keep getting in my car every day, turning that key and hoping I don't explode. Why? Because to survive in this world, we need a certain amount of trust in it. Without that trust, each of us would spend our entire short, stress-filled days worrying about every interaction and movement we make. What kind of life would that be? Not a life I think I would enjoy taking part in.

Of course, this leads to an issue: how do I live a life where I am safe, but not spending each moment worrying about the next? There is no simple answer, but I believe I can improve my chances by remembering one thing. No matter what, be inquisitive. Ask questions, get answers, search for answers, or create answers, whatever it takes to solve the riddle. The foundation of a great security mind is an inquisitive spirit. Hackers want to know how things work. What makes the car go or the doorbell ring? How do magnets work, and where is Schrödinger's cat? An inquisitive person is less likely to accept the status quo without asking... why? Why do you need to know my password, mister IT help guy? Why are you offering me the chance to launder money from Africa, oh great Prince Malhaka? Why do we store passwords in plaintext?

An inquisitive nature can then help inspire creativity. When I get an answer I don't find fulfilling, I often go in search of my own. Maybe I can forge a different path and generate a better answer. Or, more likely, in my search for an answer I will find a better understanding of the question, which allows me to accept the original answer.

So, how do you take a trusting, vulnerable human being and grant them the inquisitive spirit needed to lay the foundation of a security mindset? In my mind, it boils down to awareness and experience. The end goal is to keep asking those questions. Make it a habit to ask questions and be curious about your interactions with the world. If you do, you will begin to realize the many places where you subconsciously place trust, and you'll be better able to evaluate that trust. Have it become second nature, and you'll be on the path to building a security mindset.
New Blogger - whoami

I suppose I should start this thing with a quick whoami:

```
jmichaels@blog:/# whoami
jmichaels

jmichaels@blog:/# finger jmichaels
Login: -----                       Name: Josh Michaels
Directory: /seattle                Shell: All
Last login Mon Jan 1 00:00 (PST) on tty1
No mail.
No Plan.

jmichaels@blog:/# history
University Support Technician
Hardware Systems Support: Fortune 50
Network Administrator
IP/DNS Administrator: Fortune 50
Messaging Infrastructure Security Analyst
Network Engineer

jmichaels@blog:/# groups jmichaels
jmichaels : CFO Black Lodge Research, Defcon Goon, SakuraCon Section 9, DC206 ConMonkey, Anonymous

jmichaels@blog:/# cat /var/tmp/spaghetti
CISSP, Sec+, CICA, blah, blah, blah

jmichaels@blog:/# cat /var/tmp/blurb
My stance on security is simple. Ask questions till you get an answer,
and if the answer doesn't suit you, try to find a better one.
```

The goal of my blogging (I've been told I have to have one) is to vent, put ideas out there, and hopefully make people ask questions. I'll try to keep the trolling to a minimum. Take all of my posts light-heartedly and enjoy.
IPv6 Does Not Mean The End of IPv4

I know I've touched on this topic in some of my "IT Management" overview blogs, but it's an important one, so I thought I'd give it a blog all its own.

Even though we have a living myth that cats and dogs never get along, we all know it just isn't true. Any number of cats and dogs live together and do just fine, including our two, Sun Tzu (dog) and Nietzsche (cat). While we all enjoy the joke, we know that deep down the two are compatible, and it really is just a fun myth to keep alive. The same is true of IPv4 and IPv6. Like a cat and dog competing for resources around the house, IPv4 and IPv6 give the appearance of being incompatible around the datacenter. But just as a cat and dog will work out a pecking order that ensures both receive a sufficient amount of resources (primarily food and treats), the technology marketplace has worked out a slew of solutions that allow IPv4 and IPv6 to work together. From F5's IPv6 Gateway product to open source instructions for making an IPv6 gateway out of a Linux box, you don't have to choose.

[Photo: Sun Tzu and Nietzsche early in life]

The point of these products is to translate between the two protocols, making certain that incoming and outgoing messages are correctly formatted for their recipients. But that's not a very exciting description. What is exciting is the idea that you can put an IPv6 address on the Internet and translate incoming packets to IPv4 before they reach your servers. That means you can support IPv6 in the place that matters, the public Internet, where you either share the shrinking pool of IPv4 addresses or move to IPv6, without changing every server in your datacenter at once. That's huge, because upgrading every machine in your datacenter would be painful to say the least. It might just come off without a hitch, but it might not.

Another great benefit is that you don't have to drop support for IPv4 in order to support IPv6. That matters a lot, because many clients out there don't yet support IPv6, but the future will definitely belong to the newer protocol, if only because we need the new address space. Utilizing a gateway, you can support both until IPv6 is ubiquitous, then slowly turn off IPv4 support. (A small code sketch at the end of this post shows the client-facing half of that idea.)

So the short summary: with an IPv6 gateway, you can serve your current customers, prepare for serving future customers, and avoid changing your entire datacenter over a single weekend... if you're lucky enough not to have problems doing so anyway. But that's our job, making cats and dogs play nicely together.

Related Blogs:

- Don MacVittie - F5 BIG-IP IPv6 Gateway Module
- F5 News - IPv6 Gateway
- No IPv4 For You!
- F5 Friday: 'IPv4 and IPv6 Can Coexist' or 'How to eat your cake and ...
- IPv6: Yeah, we got that
- F5 Friday: Thanks for calling... please press 1 for IPv6 or 2 for IPv4.
- IP::addr and IPv6
- F5 Makes IPv6 Connectivity a Reality for Interop 2011 Las Vegas ...
- IPv6: Not When but How?
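As promised above, here's a minimal sketch of the client-facing trick that dual-stack servers (and, conceptually, gateways) rely on: a single AF_INET6 listening socket with IPV6_V6ONLY cleared accepts both IPv6 clients and IPv4 clients, the latter appearing as v4-mapped addresses like ::ffff:192.0.2.1. This is plain POSIX sockets, not F5's gateway code, and the port number is arbitrary:

```c
/* dualstack.c - one listening socket that accepts IPv4 and IPv6.
 * A sketch of the dual-stack idea, not any vendor's implementation.
 */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* 0 = also accept IPv4 clients, seen as v4-mapped IPv6 addresses */
    int off = 0;
    setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off);

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof addr);
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;     /* ::, all v6 and v4 interfaces */
    addr.sin6_port   = htons(8080);

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 16);

    for (;;) {
        struct sockaddr_storage peer;
        socklen_t len = sizeof peer;
        int conn = accept(fd, (struct sockaddr *)&peer, &len);
        if (conn < 0) continue;

        char host[NI_MAXHOST];
        /* IPv4 clients show up here as ::ffff:a.b.c.d */
        if (getnameinfo((struct sockaddr *)&peer, len, host, sizeof host,
                        NULL, 0, NI_NUMERICHOST) == 0)
            printf("connection from %s\n", host);
        close(conn);
    }
}
```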
BitTorrent For Distributed Deployments

About 3 years ago, I was working at a small startup and had a conversation with one of my coworkers about using BitTorrent for distributed deployments in our datacenter. We got on the whiteboard, drew up some preliminary ideas, and then got our boss in the room to show him our idea. Being the "dark days" of BitTorrent, most of you can probably guess what came next. He said absolutely not: we are a legitimate company doing legitimate business, and if anyone found out we were running BitTorrent internally it could tarnish our image. While that blocked us from rolling it out in our environment, we honestly didn't need that much throughput anyway. We had a very heterogeneous environment with only double digits of any particular server class, so we stayed with the central-repository distribution model.

Fast forward a few years, and BitTorrent has become a staple in the open source software distribution arena. Almost any Linux distribution imaginable can be had via BitTorrent these days, at a fraction of what it would cost to host the images centrally. I would call this the "transitional period", when BitTorrent started to receive something other than negative press.

I hadn't really heard of anyone using BitTorrent in the capacity we had originally discussed until a few weeks ago, when Twitter's engineering group posted a blog on their implementation. They had been using a single Git server to host all of their software packages, and then instructing their application servers to all download from that one server. This was sufficient in the beginning, but as we all know, Twitter has grown by leaps and bounds since its inception in 2006. Hitting a single Git server with thousands of application servers just didn't work. Enter their new system for distributed deployments: Murder.

Murder has nothing to do with the nightly news; it is also the name for a flock of crows, which segues nicely into Twitter's bird theme. It was written by Larry Gadea, an infrastructure engineer at Twitter, and is deployed using Python and Capistrano: Python does the heavy lifting for the BitTorrent traffic, while Capistrano instructs the application servers. Given that BitTorrent was originally designed to run on the Internet, with limited throughput and relatively high latencies, the standard BitTorrent options needed some modification. They decreased the timeouts on chunk transfers so that machines don't hang waiting for a chunk that may never arrive. Encryption, normally used to evade ISP interference, wasn't needed inside their own datacenter, so it was disabled to reduce CPU overhead. Distributed hash tables (DHT) were also turned off to encourage a more linear distribution, which is discussed at length in Larry's presentation. Lastly, UPnP was disabled, since NAT traversal wasn't needed and it makes traffic patterns less predictable.

If you are interested in playing with Murder, it can be downloaded from GitHub: http://github.com/lg. If you have the time, I would also encourage you to watch Larry's half-hour talk on the system. He outlines why they did what they did, and what tools are available to build a similar distributed deployment system that isn't Ruby- or Python-centric. It is very cool to see such a neat and innovative protocol finally get some good press after all these years.
DevCentral Top5 11/06/2009

While ramping up for "The Next Big Thing" continues amongst the DC staff, there is much to talk about in the content that's happening in the here and now, not just in the eagerly awaited future (with jet-packs and stuff...). DevCentral has seen its share of cool content this week, as it does every week, so let's talk about what needs talking about. Bringing you everything from TCL strings to a philosophical discussion of when vs. where and which is more important, I'm here with my Top5 picks for the week. And here they are:

When Is More Important Than Where in Web Application Security
http://devcentral.f5.com/s/weblogs/macvittie/archive/2009/11/06/when-is-more-important-than-where-in-web-application-security.aspx

In this post Lori was as insightful and informative as ever, discussing why being timely is, in general, more important than being perfect when it comes to application security. It's a pretty simple concept to me. When it comes right down to it, no one really cares where you solve a security problem; they care about when you solve it. It's well and good to argue that things should be solved at the app layer rather than the WAF, but if I can provide a solution in 10 minutes... how long is it going to take you to patch every single application for even a minuscule security flaw? I agree just as much with Lori's reminder that WAF and app-layer security models shouldn't compete. They are complementary in the war against attacks, not mutually exclusive, and should be treated as such. Every time someone tries to tell you which method is more "proper" or "correct", though, I'd ask them just how much they care about being proper in very real terms. How much is it worth in hours (or days) of their application being exposed? At what point is it worth trading 20, 40, or 120 hours of exposure to a known exploit for an ounce of being "proper", which is already debatable at best, versus getting the fix in place in a fraction of the time?

Lori being insightful and informative isn't anything new. She knew she had a solid point to make, and I tend to agree. What she didn't know was just how timely she was in setting the stage for her point to be illustrated, but we'll get to that in a moment. They call that foreshadowing, I think. I can tell you're on pins and needles.

20 Lines or Less #31 - Traffic shaping, header re-writing, and TLS renegotiation
http://devcentral.f5.com/s/weblogs/cwalker/archive/2009/11/06/20-lines-or-less-31-ndash-traffic-shaping-header-re-writing.aspx

Behold, your suspense is relieved! I unveil before your very eyes the payoff to Lori's unintentional stage-setting. But how, you ask, does the 20LoL tie in with the when vs. where of app security? Via the much-discussed TLS renegotiation vulnerability that has been burning up the net, of course. When a security measure as deeply rooted and common as TLS encryption is found to be susceptible to attack, there is much to talk about, and talk they have. It turns out that via a man-in-the-middle attack, would-be ne'er-do-wells have the potential to insert information into a renegotiated SSL connection. This is very bad. What's very good, however, is that a user from the DevCentral community drafted a simple fix, at least for their deployment, the very next day. That's the power of iRules: agility at its very finest, if I've ever seen it. We could debate all day where the best place, technically speaking, to implement the fix is.
Or we could just fix it with about 10 minutes of coding and another 30 minutes of testing, and be done with it. That's just one of the rules in the 20LoL, of course. There are two more very cool examples of iRules doing the cool things they do in less than 21 lines of code. Check them out.

iRules 101 - #16 - Parsing Strings with the TCL Scan Command
http://devcentral.f5.com/s/Default.aspx?tabid=63&articleType=ArticleView&articleId=2346

Jason digs into the amazingly powerful yet often overlooked scan command in his latest contribution to the iRules 101 series. The scan command has some staggeringly powerful capabilities for parsing strings in an ultra-efficient manner. It takes a little getting used to, but it's definitely a command with potential beyond what's obvious at first glance. Jason does a good job of breaking down some of the options and giving clear examples of not only the command itself but how you might use it in the context of an iRule. Very cool stuff, and worth a read for any current or would-be iRulers out there.

Operations Manager Debugging Part I: Top 10 Tools for Developing and Debugging Management Packs
http://devcentral.f5.com/s/weblogs/jhendrickson/archive/2009/11/04/operations-manager-debugging-part-i-top-10-tools-for-developing.aspx

You've been hearing a lot about the Management Pack lately. That's not likely to change, especially if the team keeps putting out not only consistent, timely releases with new features, but awesome documentation and commentary along the way. Case in point: Joel Hendrickson put up a blog post this week about his top 10 favorite tools for the kind of debugging he often does as a member of that team. Whether or not you're directly involved with the Management Pack, this is a very cool list. It's interesting to see him walk through each tool, what it does, and in some cases how he uses it. I'm always a sucker for hearing a geek talk about... well... being a geek, and that's just what Joel is up to in this informative post. Take a look for all your code-debugging needs.

pyControl Just as Happy on Linux
http://devcentral.f5.com/s/weblogs/jason/archive/2009/11/04/pycontrol-just-as-happy-on-linux.aspx

In response to the many questions asking whether pyControl is viable as a Linux solution for iControl programming, Jason put together this tidy little post that not only answers the question (yes, by the way) but shows you just how to get started. It was a cool reminder to me not only of how awesome the pyControl project is, but of just how easy it can be to start digging into iControl and all the cool things it can do. With just a few commands, outlined in Jason's post, you can have an environment up and running, ready to start developing. I'm even more excited to see what's coming in pyControl2, whenever I get a chance to play with that. But that's a post for another day.

There you have it: five picks for this week that you really should not miss. As always, don't be shy with your feedback, and check out previous versions here: http://devcentral.f5.com/s/Default.aspx?tabid=101

#Colin
When 'Free' Isn't. Help me choose a distro!

Lori and I run a lot of servers out of the house, and we're replacing one that has served us well but is nearing the end of its useful life. Since this particular server is publicly exposed, it gets Linux by default. That's our policy, and it works for us. Last night I finally had the server completely working, and went to install the OS. We have licensed copies of most Linux distros, but they're all getting a little aged. It's been years since I researched and installed a free Linux distro, so I had gone surfing and settled on giving Fedora a try. It's close to Red Hat, which I have used a lot in my career.

I've never been one to bash OSS just because people are giving their time and I'm getting it for free. I'm willing to do a little extra work in that regard, rather than just bad-mouth it when things go wrong. But I hit my limit last night.

The first install failed because Fedora assumes it will be doing the RAIDing of your disks. So even though we had a RAID card in and configured, it RAIDed over top of the logical volume. Needless to say, at boot time it was rather confused. No big deal; I don't mind letting software do the RAID, even though it's less efficient. So I turned off RAID in the hardware and reinstalled.

Upon reboot after the reinstall, the system was spewing error messages that effectively locked me out of the console on the machine. Every local console was receiving messages at the rate of several a second, interfering with typing any commands. I managed to get in via SSH from another machine, and when I was certain I could do something, I went out and searched the web for my error message. Now, I downloaded this distro last week, and I discovered that this particular bug has been patched since May. So why is the official Fedora site still distributing the image with the error? Anyway, I updated the system, a process that takes about three times as long as the install, and rebooted. At this point it was early morning and I had to work today. When the reboot came up, the routing tables were wiped. I tried to update them, but route wouldn't accept the commands. At that point, I gave up in disgust and went to bed.

I'm done with Fedora. The days when I was willing to invest weeks in making a distro run are gone; I have other things to do. So, here's the deal: I need a new server-quality distro. I figure we've built a community 20,000 strong out here, so you all can help me pick one. My requirements are: (1) easy to install. Assume I'm capable of tweaking my kernel and building from sources (I have), and that I have more important things to do with my time (I do). (2) Server quality. When all is said and done, this will be an app server. It needs to support LAMP and more, including Samba. It will sit behind a BIG-IP, but that shouldn't matter at all. Even if the X stuff is in the distro, I won't install it; this is a server.

Remember, no Windows suggestions. I'm all about "right tool for the job", and for this job, Windows isn't it (though we do use it for some things). So drop a comment: what do you think I should be using? I've installed about all of them except Ubuntu, and I've helped my daughter install Ubuntu, so I've toyed with all of them at least a little bit. If no one drops a suggestion, I'll find one myself; I just thought this would be a fun discussion to have.

Until you comment,
Don.

Reading: Nothing, see above.
Imbibing: Coffee, Vault, and RedBull (see above)