Does a serverssl profile require a certificate?
Hi,

We want to use the F5 for SSL bridging (decrypt using a client SSL profile and re-encrypt using a serverssl profile). The problem is that our server uses a self-signed root certificate, and the certificate name is the server's IP address (e.g. 10.10.10.1). How do we configure the server SSL profile? Should we just choose None for the certificate setting? Should we import the server's self-signed root certificate into the BIG-IP? If so, where do we import it?

Thank you, Kridsana

HSTS / ASM connection drops
Hi All,

We currently implement HSTS as an iRule on the F5, and we also decrypt and inspect traffic with ASM. There are discussions internally on our side about adding HSTS to the web server responses on the actual server rather than on the F5. If we were to do this, is it possible/likely that F5 ASM decrypting the traffic would then result in connection drops? Thank you

"Server" Field Not Shown in curl -I Command
Dear all,

Using F5's command line, I have been trying to find out from the HTTP headers which web server is serving out the web page. Using curl -I and curl -v to get a verbose response, the "Server" field was never shown. Other URLs did show the "Server" field; only this particular URL does not. Has anyone encountered this before, or does anyone know about hiding certain HTTP header fields from curl?

Remote syslog server
I have configured a remote logging server (SolarWinds Kiwi syslog server). The server is receiving a lot of logs per second, and all of them have info severity. Although I have changed the minimum severity in the options, I am still getting 3-4 logs per second. Please help me with how to configure this.

Dead Server Redirect
Hi,

We have a server that is down, with an external DNS name and IP address that I would like to point at an internal IP that is working correctly. Here is what I have done:

- Created a node for the working server
- Added that node to a newly created pool
- Created a VS with the dead external IP as the VS IP
- Have an iRule that routes traffic from that VS to the pool

I believe that should work, but it times out and does not redirect. I have also tried pinging the address, and it times out too. The only thing that I do see is that the DNS name is different from the name of the ping server that responds. Any help with anything you think I should be doing would be appreciated.

Thanks....Ray

The Disadvantages of DSR (Direct Server Return)
I read a very nice blog post yesterday discussing some of the traditional pros and cons of load-balancing configurations. The author comes to the conclusion that if you can use direct server return, you should. I agree with the author's list of pros and cons; DSR is the least intrusive method of deploying a load balancer in terms of network configuration. But there are quite a few disadvantages missing from the author's list.

Author's List of Disadvantages of DSR

The disadvantages of Direct Routing are:

- The backend server must respond to both its own IP (for health checks) and the virtual IP (for load-balanced traffic).
- Port translation or cookie insertion cannot be implemented.
- The backend server must not reply to ARP requests for the VIP (otherwise it will steal all the traffic from the load balancer).
- Prior to Windows Server 2008, some odd routing behavior could occur.
- In some situations either the application or the operating system cannot be modified to utilise Direct Routing.

Some additional disadvantages:

- Protocol sanitization can't be performed. This means vulnerabilities introduced through manipulation or lax enforcement of RFCs and protocol specifications can't be addressed.
- Application acceleration can't be applied. Even the simplest of acceleration techniques, e.g. compression, can't be applied because the traffic is bypassing the load balancer (a.k.a. application delivery controller).
- Implementing caching solutions becomes more complex. With a DSR configuration, the routing that makes it so easy to implement requires that caching solutions be deployed elsewhere, such as via WCCP on the router. This requires additional configuration and changes to the routing infrastructure, and introduces another point of failure as well as an additional hop, increasing latency.
- Error/Exception/SOAP fault handling can't be implemented.
In order to address failures in applications, such as missing files (404) and SOAP faults (500), it is necessary for the load balancer to inspect outbound messages. With a DSR configuration this ability is lost, which means errors are passed directly back to the user without the ability to retry the request, write an entry in the log, or notify an administrator.

- Data leak prevention can't be accomplished. Without the ability to inspect outbound messages, you can't prevent sensitive data (SSNs, credit card numbers) from leaving the building.
- Connection optimization functionality is lost. TCP multiplexing can't be accomplished in a DSR configuration because it relies on separating client connections from server connections. This reduces the efficiency of your servers and minimizes the value added to your network by a load balancer.

There are more disadvantages than you're likely willing to read, so I'll stop there. Suffice to say that the problem with the suggestion to use DSR whenever possible is that if you're an application-aware network administrator, you know that most of the time DSR isn't the right solution, because it restricts the ability of the load balancer (application delivery controller) to perform additional functions that improve the security, performance, and availability of the applications it is delivering. DSR is well suited, and always has been, to UDP-based streaming applications such as audio and video delivered via RTSP. However, in the increasingly sensitive environment that is application infrastructure, it is necessary to do more than just "load balancing" to improve the performance and reliability of applications. Additional application delivery techniques are an integral component of a well-performing, efficient application infrastructure. DSR may be easier to implement and, in some cases, may be the right solution. But in most cases it's going to leave you simply serving applications instead of delivering them.
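As a concrete illustration of the error-handling point above: because a full proxy sees responses on their way back to the client, it can notice a fault and retry elsewhere, which DSR structurally cannot. A minimal sketch (illustrative Python, not F5 code; the pool members here are hypothetical stand-ins for real servers):

```python
# Sketch: a full proxy can inspect the upstream response status and retry
# a failed request on another pool member instead of leaking the error.

def fetch_with_retry(pool, request, retryable=frozenset({500, 502, 503})):
    """Try each pool member in turn; retry when the response looks like a fault."""
    last = None
    for member in pool:
        status, body = member(request)
        if status not in retryable:
            return status, body          # healthy response: hand it to the client
        last = (status, body)            # fault seen: try the next member instead
    return last                          # every member failed; surface the last fault

# Hypothetical pool members standing in for real servers:
def broken(_req):  return 500, "SOAP Fault"
def healthy(_req): return 200, "OK"

print(fetch_with_retry([broken, healthy], "GET /"))   # (200, 'OK')
```

In a DSR deployment the response bypasses the load balancer entirely, so the `if status not in retryable` check has nothing to inspect and the fault goes straight to the user.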
Just because you can, doesn't mean you should.

Like Cars on a Highway.
Every once in a while, as the number of people following me grows (thank you, each and every one), I like to revisit something that is fundamental to the high-tech industry but is often overlooked or not given the attention it deserves. This is one of those times, and the many-faceted nature of any application infrastructure is the topic. While much has changed since I last touched on this topic, much has not, leaving us at an odd inflection point. When referring to movies that involve a lot of CGI, my oldest son called it "the valley of expectations": that point where you know what you'd like to see and you're so very close to it, but the current offerings fall flat. He specifically said that the Final Fantasy movie was just such a production. The movie came so close to realism that it was disappointing, because you could still tell the characters were all animations. I thought it was insightful, but still enjoyed the movie. It is common to use the "weakest link in the chain" analogy whenever we discuss hardware, because you have parts sold by several vendors that include parts manufactured by several more vendors, making the entire infrastructure start to sound like the "weakest link" problem. That holds whether you're discussing individual servers and their performance bottlenecks (which vary from year to year, depending upon what was most recently improved upon) or network infrastructures, which vary with a wide variety of factors, including those servers and their bottlenecks. I think a better analogy is a busy freeway. My reasoning is simple: you have to worry about the manufacture and operation of each vehicle (device) on the road, the road (wire) itself, interchanges, road conditions, and toll booths. There is a lot going on in your infrastructure, and "weakest link in the chain" is not a detailed enough comparison.
In fact, if you're of a mathematical bent, then the performance of your overall architecture could be summarized as a function of all n infrastructure elements required for the application to function correctly and deliver information to the end user. From databases to Internet connections to client bandwidth, it's all jumbled up in there. Even this formulation isn't perfect, simply because some performance degradation is so bad that it drags down the entire system, and other issues are not obvious until the worst offender is fixed. This is the case in the iterative improvement of servers: today the memory is the bottleneck; once it is fixed, the next bottleneck is disk; once that is improved, the next bottleneck is network I/O... on and on it goes, and with each iteration we get faster overall servers.

Interestingly enough, security follows very much the same equation, with the caveat that only a subset of infrastructure elements is likely to be looked at for security, just because not everything is exposed to the outside world. For example, the database need only be considered if you allow users to enter data into forms that will power a DB query directly.

So what is my point? Well, simply put: when you are budgeting, items that impact more than one element, from a security or performance perspective, or more than one application, should be prioritized over things that are specific to one element or one application. The goal of improving the overall architecture should trump the needs of individual pieces or applications, because IT, and indeed the business, is built upon the overall application delivery architecture, not just a single application. Even though one application may indeed be more relevant to the business (I can't imagine that eBay has any application more important than their web presence, for example, since it is their revenue-generation tool), overall improvements will help that application and your other applications.
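The equation itself did not survive the original post; only the definition of n remains. As a hedged sketch, not the author's original formula, the bottleneck idea described above might be written:

```latex
% Plausible stand-in for the lost equation: overall application performance
% is a function of all n infrastructure elements, dominated by the slowest.
P_{\text{app}} = f\left(p_1, p_2, \ldots, p_n\right) \approx \min_{1 \le i \le n} p_i
```

where each p_i is the performance of one infrastructure element (database, Internet connection, client bandwidth, and so on) and n is the number of elements the application depends on.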
Of course you should fix those terribly glaring issues with either of these topics that are slowing the entire system down or compromising overall security, but you should also consider solutions that will net you more than a single-item fix. Yes, I think an advanced ADC product like F5's BIG-IP is one of these multi-solution products, but it goes well beyond F5 into areas like SSDs for database caches and such. So keep it in mind. Sometimes the solution to making application X faster or more secure is to make the entire infrastructure faster or more secure. And if you look at it right, availability fits into this space too. Pretty easily, in fact.

Layer 4 vs Layer 7 DoS Attack
Not all DoS (Denial of Service) attacks are the same. While the end result is to consume as much - hopefully all - of a server or site's resources such that legitimate users are denied service (hence the name), there is a subtle difference in how these attacks are perpetrated that makes one easier to stop than the other.

SYN Flood

A Layer 4 DoS attack is often referred to as a SYN flood. It works at the transport protocol (TCP) layer. A TCP connection is established in what is known as a 3-way handshake. The client sends a SYN packet, the server responds with a SYN ACK, and the client responds to that with an ACK. After the "three-way handshake" is complete, the TCP connection is considered established. It is at this point that applications begin sending data using a Layer 7 or application-layer protocol, such as HTTP. A SYN flood uses the inherent patience of the TCP stack to overwhelm a server by sending a flood of SYN packets and then ignoring the SYN ACKs returned by the server. This causes the server to use up resources waiting a configured amount of time for the anticipated ACK that should come from a legitimate client. Because web and application servers are limited in the number of concurrent TCP connections they can have open, if an attacker sends enough SYN packets to a server it can easily chew through the allowed number of TCP connections, thus preventing legitimate requests from being answered by the server.

SYN floods are fairly easy for proxy-based application delivery and security products to detect. Because they proxy connections for the servers, and are generally hardware-based with a much higher TCP connection limit, the proxy-based solution can handle the high volume of connections without becoming overwhelmed. Because the proxy-based solution is usually terminating the TCP connection (i.e. it is the "endpoint" of the connection), it will not pass the connection to the server until it has completed the 3-way handshake.
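The half-open-connection exhaustion described above can be modeled in a few lines. This is a toy sketch in Python, not any real TCP stack: the listener has a fixed backlog of pending handshakes, the flood sends SYNs but never the final ACK, and a legitimate client then finds no slot free.

```python
# Toy model of SYN-flood exhaustion: a listener with a fixed number of
# slots for half-open (SYN received, awaiting ACK) connections.

class Listener:
    def __init__(self, max_pending):
        self.max_pending = max_pending
        self.pending = set()              # half-open connections awaiting final ACK
        self.established = set()

    def syn(self, client):
        """Client's SYN: reserve a slot and 'send' SYN ACK, or drop if full."""
        if len(self.pending) >= self.max_pending:
            return False                  # backlog full: request is denied
        self.pending.add(client)
        return True

    def ack(self, client):
        """Client's final ACK completes the 3-way handshake."""
        if client in self.pending:
            self.pending.remove(client)
            self.established.add(client)
            return True
        return False

server = Listener(max_pending=128)
for i in range(200):                      # attacker floods SYNs, never ACKs
    server.syn(f"attacker-{i}")
print(server.syn("legitimate-client"))    # False: no slot left for a real user
```

A terminating proxy changes the picture because `ack` must succeed before anything is ever forwarded to the server, so the server's own backlog never fills with the attacker's half-open connections.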
Thus, a SYN flood is stopped at the proxy and legitimate connections are passed on to the server with alacrity. Attackers are generally stopped from flooding the network through the use of SYN cookies. SYN cookies utilize cryptographic hashing and are therefore computationally expensive, making it desirable to let a proxy/delivery solution with hardware-accelerated cryptographic capabilities handle this type of security measure. Servers can implement SYN cookies, but the additional burden placed on the server negates much of the gain achieved by preventing SYN floods, and often results in available, but unacceptably slow, servers and sites.

HTTP GET DoS

A Layer 7 DoS attack is a different beast, and it's more difficult to detect. A Layer 7 DoS attack is often perpetrated through the use of HTTP GET. This means that the 3-way TCP handshake has been completed, thus fooling devices and solutions which examine only Layer 4 and TCP communications. The attacker looks like a legitimate connection and is therefore passed on to the web or application server. At that point the attacker begins requesting large numbers of files/objects using HTTP GET. They are generally legitimate requests; there are just a lot of them. So many, in fact, that the server quickly becomes focused on responding to those requests and has a hard time responding to new, legitimate requests. When rate limiting was used to stop this type of attack, the bad guys moved to using a distributed system of bots (zombies) to ensure that the requests (the attack) were coming from myriad IP addresses, making the attack not only more difficult to detect but also more difficult to stop. The attacker uses malware and trojans to deposit a bot on servers and clients, and then remotely includes them in his attack by instructing the bots to request a list of objects from a specific site or server.
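The rate-limiting countermeasure mentioned above can be sketched roughly as follows. This is illustrative Python with made-up thresholds, not any vendor's implementation: count each client IP's requests within a sliding window, and temporarily blacklist an address that exceeds the limit.

```python
import time

# Rough sketch of per-client rate limiting with temporary blacklisting.
# All thresholds here are invented for illustration.

class RateLimiter:
    def __init__(self, max_requests, window_s, ban_s):
        self.max_requests = max_requests
        self.window_s = window_s
        self.ban_s = ban_s
        self.history = {}                 # ip -> recent request timestamps
        self.blacklist = {}               # ip -> time the ban expires

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        if self.blacklist.get(ip, 0) > now:
            return False                  # address is still serving out its ban
        recent = [t for t in self.history.get(ip, []) if now - t < self.window_s]
        recent.append(now)
        self.history[ip] = recent
        if len(recent) > self.max_requests:
            self.blacklist[ip] = now + self.ban_s   # too many objects requested
            return False
        return True

limiter = RateLimiter(max_requests=100, window_s=60, ban_s=300)
print(limiter.allow("203.0.113.9", now=0.0))   # True: well under the limit
```

Note the weakness the article goes on to describe: a distributed attack spreads requests across many IPs, so no single address trips `max_requests`, and a limit tight enough to catch the bots starts denying legitimate clients too.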
The attacker might not use bots, but instead might gather enough evil friends to launch an attack against a site that has annoyed them for some reason. Layer 7 DoS attacks are more difficult to detect because the TCP connection is valid and so are the requests. The trick is to realize when there are multiple clients requesting large numbers of objects at the same time and to recognize that it is, in fact, an attack. This is tricky because there may very well be legitimate requests mixed in with the attack, which means a "deny all" philosophy will result in the very situation the attackers are trying to force: a denial of service. Defending against Layer 7 DoS attacks usually involves some sort of rate-shaping algorithm that watches clients and ensures that they request no more than a configurable number of objects per time period, usually measured in seconds or minutes. If a client requests more than the configurable number, the client's IP address is blacklisted for a specified time period and subsequent requests are denied until the address has been freed from the blacklist. Because this can still affect legitimate users, Layer 7 firewall (application firewall) vendors are working on ways to get smarter about stopping Layer 7 DoS attacks without affecting legitimate clients. It is a subtle dance and requires a bit more understanding of the application and its flow, but if implemented correctly it can improve the ability of such devices to detect and prevent Layer 7 DoS attacks from reaching web and application servers and taking a site down.

The goal of deploying an application firewall or proxy-based application delivery solution is to ensure the fast and secure delivery of an application. By preventing both Layer 4 and Layer 7 DoS attacks, such solutions allow servers to continue serving up applications without the performance degradation caused by dealing with those attacks.

They're Called Black Boxes Not Invisible Boxes
Infrastructure can be a black box only if its knobs and buttons are accessible.

I spent hours at Interop yesterday listening to folks talk about "infrastructure." It's a hot topic, to be sure, especially as it relates to cloud computing. After all, it's a keyword in "Infrastructure as a Service." The problem is that when most people say "infrastructure," it appears what they really mean is "server," and that just isn't accurate. If you haven't been in a data center lately, there is a whole lot of other "stuff" that falls under the infrastructure moniker that isn't a server. You might also have a firewall, anti-virus scanning solutions, a web application firewall, a load balancer, WAN optimization solutions, identity management stores, routers, switches, storage arrays, a storage network, an application delivery network, and other networky-type devices. Oh, there's more than that, but I can't very well list every possible solution that falls under the "infrastructure" umbrella or we'd never get to the point.

In information technology and on the Internet, infrastructure is the physical hardware used to interconnect computers and users. Infrastructure includes the transmission media, including telephone lines, cable television lines, and satellites and antennas, and also the routers, aggregators, repeaters, and other devices that control transmission paths. Infrastructure also includes the software used to send, receive, and manage the signals that are transmitted. In some usages, infrastructure refers to interconnecting hardware and software and not to computers and other devices that are interconnected. However, to some information technology users, infrastructure is viewed as everything that supports the flow and processing of information.
-- TechTarget definition of "infrastructure"

The reason this is important to remember is that people continue to put forth the notion that cloud should be a "black box" with regards to infrastructure.
Now in a general sense I agree with that sentiment, but if – and only if – there is a mechanism to manage the resources and services provided by that "black boxed" infrastructure. For example, "servers" are infrastructure and today are very "black box," but every IaaS (Infrastructure as a Service) provider offers the means by which those resources can be managed and controlled by the customer. The hardware is the black box, not the software. The hardware becomes little more than a service. That needs to – nay, must – extend to the rest of the infrastructure: the network infrastructure that is ultimately responsible for delivering the applications being deployed on that black-box server infrastructure, the devices and services that interconnect users and applications. It simply isn't enough to wave a hand at the network infrastructure and say "it doesn't matter," because as a matter of fact it certainly does matter.

Service Virtualization Helps Localize Impact of Elastic Scalability
Service virtualization is the opposite of – and complementary implementation to – server virtualization. One of the biggest challenges with any implementation of elastic scalability, as it relates to virtualization and cloud computing, is managing that scalability at run time and at design (configuration) time. The goal is to transparently scale out some service – network or application – in such a way as to eliminate the operational disruption often associated with scaling up (and down) efforts. Service virtualization allows virtually any service to be transparently scaled out with no negative impact to the service and, perhaps more importantly, to the applications and other services which rely upon that service.