san (11 Topics)

How can I configure Server SSL Profiles to connect to different URLs on the same server?
Hi, we have a web server with two sites published on it through a single virtual server on the BIG-IP: site1.domain.uk and site2.domain.uk. Our security policy dictates that we must encrypt both the connection between the user and the BIG-IP and the connection between the BIG-IP and the web server. We initially purchased a SAN certificate covering site1.domain.uk and site2.domain.uk (site1.domain.uk is the default name). We have tried various methods of getting end-to-end connectivity working when a user connects with either URL, but all have failed. Can anyone provide guidance on how to achieve this?

multi-domain with client-ssl profile set using SNI option
Hello, we are currently using SNI successfully with single certificates. Now we have a requirement I don't know how to address: using multi-domain (SAN) certificates. My default SNI multi-domain profile is easy to set, but how can I set the secondary SNI options? How do I configure the server-name parameter when there are multiple URLs inside the second multi-domain cert?

VIP
- default SNI - multi-domain.cert
- second SNI cert - multi-domain2, server-name = ?????
- third SNI cert - multi-domain3, server-name = ?????

Thank you and be safe, J
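A minimal tmsh sketch of one way to lay this out. All profile, certificate, and host names below are placeholders, not from the original post. The default profile carries sni-default; each additional client-ssl profile names one FQDN from its certificate in server-name. On recent TMOS versions the BIG-IP also matches the client's SNI against the other SAN entries in each profile's certificate, so server-name mainly needs to identify the profile; verify that behavior against your version's release notes before relying on it.

```shell
# Placeholder names throughout -- substitute your own certs, keys, and FQDNs.
# Default profile: answers when the client sends no SNI or nothing matches.
tmsh create ltm profile client-ssl clientssl_default \
    cert multi-domain.crt key multi-domain.key sni-default true

# Additional profiles: server-name set to one FQDN from each SAN cert.
tmsh create ltm profile client-ssl clientssl_san2 \
    cert multi-domain2.crt key multi-domain2.key server-name www2.example.com
tmsh create ltm profile client-ssl clientssl_san3 \
    cert multi-domain3.crt key multi-domain3.key server-name www3.example.com

# Attach all three client-ssl profiles to the same virtual server.
tmsh modify ltm virtual vs_https profiles add \
    { clientssl_default clientssl_san2 clientssl_san3 }
```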
character limit f5 subject alternative name

Guys, I am having an issue creating a .csr on the F5. Is there a character limit for the Subject Alternative Name field? Ours is 1111 characters including spaces, and it fails with the error "An error occurred while processing your request". But when I delete a few domains (about 2) it succeeds :( Please help. Thanks
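I don't know the exact limit the BIG-IP GUI imposes, but one workaround sketch, if the GUI keeps rejecting a long SAN list, is to generate the key and CSR off-box with openssl (where the SAN list length is not a practical problem) and then import them. Every file name and hostname below is an invented placeholder:

```shell
# Placeholder CN/SAN entries -- extend the subjectAltName line as needed.
cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
req_extensions = v3_req
prompt = no
[dn]
CN = site1.example.com
[v3_req]
subjectAltName = DNS:site1.example.com, DNS:site2.example.com, DNS:site3.example.com
EOF

# Generate the key and the CSR from the config above.
openssl genrsa -out san.key 2048
openssl req -new -key san.key -config san.cnf -out san.csr

# Verify the SAN list made it into the CSR before submitting it to the CA.
openssl req -in san.csr -noout -text | grep -A1 'Subject Alternative Name'
```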
Authentication name in server ssl profile and SAN field

Hello, in a Server SSL profile, is the FQDN in the 'Authenticate Name' field compared only to the CN field of the certificate, or is the SAN (Subject Alternative Name) field of the certificate also compared? We exchange traffic with a company that currently presents a "*.company.com" certificate, so we authenticate the server with "*.company.com" in the Authenticate Name field of the Server SSL profile. They will soon change their certificate to CN "company.com" and put "*.company.com" in the SAN part of the certificate. How will the Server SSL profile handle this? Will SSL fail because the certificate CN no longer equals the Authenticate Name field, or will it succeed because the SAN field contains a name equal to it? Thank you. Fred
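Whatever the matching rule turns out to be on your TMOS version, it helps to see exactly what the peer presents. This openssl sketch reproduces the scenario locally with a throwaway self-signed certificate (all names are placeholders); against the live server you would replace the file read with `openssl s_client -connect host:443` instead:

```shell
# Throwaway cert mimicking the upcoming change: CN=company.com,
# wildcard only in the SAN. Placeholder names throughout.
openssl req -x509 -newkey rsa:2048 -nodes -keyout t.key -out t.crt \
    -days 1 -subj '/CN=company.com' \
    -addext 'subjectAltName = DNS:*.company.com, DNS:company.com'

# Show the subject CN and the SAN list side by side, as the
# BIG-IP would see them during server-side verification.
openssl x509 -in t.crt -noout -subject -ext subjectAltName
```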
SAN Certificate Troubleshooting

Hello, I have a SAN certificate installed on a BIG-IP TL2000. The certificate was imported as a .pfx, but I also tried converting it and installing it as a .pem file. The problem is that I cannot use a Client SSL profile with this certificate. The certificate covers three subdomains: 1.xyz.com, 2.xyz.com, and 3.xyz.com. Any help appreciated. Thank you
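A common cause of this symptom is a .pfx import that does not leave a matching cert/key pair behind. One way to check is to split the PKCS#12 bundle with openssl and confirm the SAN entries survived. Sketched here with a throwaway bundle so it runs self-contained; every file name and password is a placeholder, so substitute your real .pfx:

```shell
# Build a throwaway bundle that mirrors the scenario (placeholder names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
    -days 1 -subj '/CN=1.xyz.com' \
    -addext 'subjectAltName = DNS:1.xyz.com, DNS:2.xyz.com, DNS:3.xyz.com'
openssl pkcs12 -export -inkey demo.key -in demo.crt \
    -passout pass:changeit -out demo.pfx

# The split you would run on the real bundle: cert and key as
# separate PEM files, ready for the BIG-IP import screens.
openssl pkcs12 -in demo.pfx -passin pass:changeit -clcerts -nokeys -out cert.pem
openssl pkcs12 -in demo.pfx -passin pass:changeit -nocerts -nodes -out key.pem

# Confirm the three SAN subdomains survived the conversion.
openssl x509 -in cert.pem -noout -ext subjectAltName
```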
Store Storing Stored? Or Blocked?

Now that Lori has her new HP TouchSmart for an upcoming holiday gift, we are finally digitizing our DVD collection. You would think that since our tastes are somewhat similar, we'd be good to go with a relatively small number of DVDs... We're not. I'm a huge fan of well-done war movies and documentaries, we share history and fantasy interests, and she likes a pretty eclectic list of pop-culture movies, so the pile is pretty big. I'm working out how to store them all on the NAS such that we can play them on any TV on the network, and that got me pondering the nature of storage access these days. Though we own a SAN, it never occurred to me to put these shows on it; that would limit access to those devices with an FC card, or we'd end up creating a share to run them all through one machine with an FC card acting as a NAS head of sorts.

In the long litany of different ways that we store things (direct attached or networked, cloud or WAN, object store or hierarchical), the distinction that stands out as the most glaring, and the one that has traditionally gotten the most attention, is file versus block. For at least a decade the argument has raged over which is more suited to enterprise use, while most of us have watched from the sidelines, somewhat bemused by the conversation, because the enterprise is using both. As a rule of thumb, if you need to boot from it or write sectors of data to it, you need block. Everything else is generally file.

And that's where I'm starting to wonder. I know there was a movement not too many years ago to make databases file based instead of block based, and that the big vendors were going in that direction, but I do wonder if maybe it's time for block to retire at the OS level. Of course, for old disks to remain compatible the OS would still have to handle block, but restricting sector reads and writes to OS-level calls (I know, it's harder with each release; that's death by a thousand cuts though) would resolve much of the problem.
Then a VMware-style boot-from-file-structure would resolve the last bit. Soon we could cut our file protocols in half. Seriously, at this point in time, what does block give us? Not much, actually. Thin/auto provisioning is available on NAS, high-end performance tweaks are available on NAS, and the extensive secondary network (be it FC or IP) is not necessary for NAS. Though there are some cases where throughput may demand a dedicated network, those are not your everyday case in a world of 1 Gig networks with multi-Gig backplanes on most devices. And 10 Gig is available pretty readily these days.

SAN has been slowly dying; I'm just pondering the question of whether it should be finished off. Seriously, people say "SAN is the only thing for high performance!", but I can guarantee you that I can find plenty of NAS boxes that perform better than plenty of SAN networks; it's just a question of vendor and connectivity. I'm a big fan of iSCSI, but am no longer sure there's a need for it out there. Our storage environment, as I've blogged before, has become horribly complex, with choices at every turn, many of which are tied more to vendors and profits than to needs and customer desires. Strip away the marketing and I wonder whether SAN has a use in the future of the enterprise. I'm starting to think not, but I won't declare it dead, as I am still laughing at those who have declared tape dead for the last 20 years, and still are, regardless of what tape vendors' sales look like. It would be hypocritical of me to laugh at them and make the same type of pronouncement. SAN will be dead when customers stop buying it, not before. Block will end when vendors stop supporting it, not before... So I really am just pondering the state of the market, playing devil's advocate a bit. I have heard people proclaim that block is much faster for database access. I have written and optimized B-Tree code, and yeah, it is. But that's because we write databases to work on blocks.
If we used a different mechanism, we'd get a different result. It is no trivial thing to move to a different storage method, but if the DB already supports file access, the work is half done; only optimizing for the new method or introducing shims to make chunks of files look like blocks would be required. If you think about it, if your DB is running in a VM, this is already essentially the case. The VM is in a file, and the DB is in that file... So though the DB might think it is directly accessing disk blocks, it is not. Food for thought.

Given Enough Standards, Define Anarchy
If a given nation independently developed twelve or fourteen governmental systems that all sat side by side and attempted to cooperate but never inter-operate, then anarchy would result. Not necessarily overnight, but issues about who is responsible for what, where a given function is best handled, and more would spring up nearly every day.

Related Articles and Blogs:
- NEC's New I/O Technology Enables Simultaneous Sharing of I/O
- Storage Area Networking
- Network Attached Storage
- SNIA (website)
- HP Flexfabric Gets Raves from Storage Networking Vendors

The State of Storage is not the State of Your Storage
George Crump posted an interesting article over on Storage Switzerland that talks about the current state of the storage market from a protocol perspective. Interestingly to me, CIFS is specifically excluded from the conversation; NAS is featured, but the guts of the NAS section only talk about NFS. In reality, NFS is a small percentage of the shared storage out there, since CIFS is built into Microsoft systems and is often used at the departmental or project level to keep storage costs down or to lighten the burden on the SAN. But now that I've nit-picked, it's a relatively solid article. A little heavy on Brocade in the SAN section, but not so much that it takes away from the piece.

The real issue at hand is to determine what will work for you/your organization/projectX/whatever over the longer term. Applications in enterprises tend to have a life of their own and just keep on going long after the designers and developers have moved off to other projects, other jobs, or sometimes even retirement. That's a chunk of the reason there are still so many mainframes out there. They weren't as easy to kill as the distributed crowd (myself included) thought, because they were the workhorses of the 70s and 80s, and those applications are still running today in many organizations. The same is going to be true in the enterprise. You can choose FCoE or even iSCSI, but they're a bit higher risk than choosing FC or NAS, simply because FC and NAS are guaranteed to be around for a good long time; there are more than a handful of storage boxes running both. I personally feel that FCoE and iSCSI are safe at this point. They are not without their adherents, and there is a lot of competition for both, signifying vendor belief that needs will grow. But they are still a bigger risk than FC or NAS, for all the reasons stated above. There's also the increasing complexity issue.
Three of the IT shops I've worked in have tried major standardization efforts... None tried to standardize their storage protocol. But that day should be coming. You're already living with one file-level and one block-level protocol if you're a mid-sized shop or larger; don't make it worse unless you're going to reap benefits that warrant further fragmenting how your storage is deployed. If you're contemplating cloud computing, your storage is going to become more complex anyway. FCoE is your best option to limit that complexity (eventually I suspect encrypted FCoE will take the cloud, since providers can then put a SAN behind it and be done), but right now it's just overhead and a new standard for your staff to learn. It certainly doesn't look like Google Storage for Developers is FCoE compliant, and they're the gorilla in that room at the moment.

Knowing that you have a base of a given architecture, it is an acceptable choice to focus instead on improving the usage of that architecture and growing it for the time being, with perhaps only a few pilot projects to explore your options and the capabilities of other technologies. As many times as Fibre Channel has been declared dead, I would not be surprised if you're starting to get a bit sheepish about continuing to deploy it. But Mr. Crump is right: FC has inertia on its side. All that Fibre Channel isn't going away unless something replaces it that is either close and familiar, or so compelling that we'll need the new functionality the replacement offers. Thus far that protocol has not appeared.

The shared-network thing hinders FCoE and iSCSI. Lots of people worry about putting this stuff on the same network as their applications, due to the congestion that could be created. But storage staff are not the people to create a dedicated Ethernet segment for your IP-based storage either, so working with the network team becomes a requirement. Which I see as a good thing. The company has one IT group; the business doesn't care about the details.
Imagine HR saying, "We don't have a system for you to take time off; our compensation sub-team was unable to meet with the time accounting team." Yeah, that's how it sounds when IT starts mumbling about network segments and cross-functional problems. No one gets much past the "We don't have..." part.

I'm still an iSCSI fan-boy, even though the above doesn't sound like it. I think it will take work to get the infrastructure right, considering half of the terms for an iSCSI network are not the standard fare of storage geeks. But having everything on one network topology is a step toward having everything look and feel the same. The way that storage grew up, we naturally consider SAN and NAS to be two different beasts with two different sets of requirements and two different use cases. To the rest of the world, it is all just storage. And they're (in general) right.

So instead of looking at adding another protocol to the mix or changing your infrastructure, take a look at optimizations. HBAs are available for iSCSI if you need them (and the more virtualized you are, the more likely they are to be needed). Your FC network could probably use a speed boost, and the industry is constantly working on the next larger speed (Mr. Crump says 16 Gb is on the way... astounding). FCoE converged adapters do much the same thing as iSCSI HBAs, but also handle IP traffic at 10 Gb. And 10 Gb will help your NAS too, assuming said NAS can utilize it or the switch was your bottleneck anyway. Tiering products like our ARX can relieve pressure points on your network behind your back, following rules you have set, and FC has virtualization tools that can help it do the same, though they're more complex should you ever lose the virtualization product. As Mr. Crump pointed out in other Storage Switzerland articles, adding an SSD tier can speed applications without a major network overhaul... And for all of these technologies, more disk is always an option.
Something like the Dell EqualLogic series can even suck in an entire new array and add it to a partition without you having to do much more than say "yes, yes, this is the partition I want to grow". Throw in the emerging SSD market for ultra-high-speed access and, well, major changes in protocol are not required. So moving forward, paying attention to the market is important, but as always, paying attention to what's in your data center is more important. The days of implementing "cool new technology" just because it is "cool new technology" are long, long gone for most of us. More on that in another blog, though.

Related Articles and Blogs:
- SSD is the new Green
- FCoE on Wikipedia
- Microsoft's iSCSI Users Guide (MS-Word)
- FCIA Advanced Case Studies
- Other Storage blogs by me

The Problem With Storage Growth is That No One Is Minding the Store
In late 2008, IDC predicted a more than 61% annual growth rate for unstructured data in traditional data centers through 2012. The numbers appear to hold up thus far, and perhaps were even conservative. This was one of the first reports to include the growth from cloud storage providers in its numbers, and that particular group was showing a much higher rate of growth, understandable since they have to turn up the storage they're going to resell. The update to this document, titled Worldwide Enterprise Systems Storage Forecast and published in April of this year, shows that even in light of the recent financial troubles, storage space is continuing to grow.

Related Articles and Blogs:
- Unstructured Data Will Become the Primary Task for Storage
- Our Storage Growth (good example of someone who can't do the above)
- Tiered Storage Tames Data Storage Growth, says Construction CIO
- Data Deduplication Market Driven by Storage Growth
- Tiering is Like Tables, or Storing in the Cloud Tier

Taking the Final Server Virtualization Steps
There is a trend in the high-tech industry to jump from one hot technology to another without waiting for customers to catch up. We're certainly seeing it with cloud; there are people out there pushing the "everyone else is doing it and gaining agility!" button every day. But you're not there yet. Part of the reason is that virtualization is still growing up. Between VM sprawl, resource over-utilization, virtual versus physical infrastructure, and the inherent task of IT to continue to support the business as it sits today, there isn't a ton of time left for hopping on the cloud bandwagon. And some of these things, VM sprawl and resource over-utilization for example, counter-indicate a move to cloud, simply because they are situations that will cost you money on a platform that charges by the rate of transfer or the number of VMs. As Lori so aptly put it in one of her blogs, if you can't manage it internally, you can't manage it externally either.

Related Articles and Blogs:
- Virtual Sprawl is Not the Real Problem
- The Virtual Virtualization Case Study
- Is VM Stall The Next Big Virtualization Challenge?
- The Best Virtualization Joke Ever (no, it really is a joke)
- Virtualization's Downsides
- Virtualization Planning: 4 Systems Management Keys to Success