platform
12 Topics

F5 platform EoSD and software EoSD are different. Which one should I use?
Hi, we use an F5 2000s running version 15.1.x. According to the platform EoSD the box has already passed its end of software development, but according to the software EoSD the version is still supported. My questions:
1. Which date applies to us? We use the WAF, and my understanding is that attack signatures are only released while EoSD support is still in effect, so will our attack signatures still be updated right now?
2. What is the difference between the two EoSD dates?
Patch for CVE-2020-5902 for old HW Platform

Hi, we have an old platform, a BIG-IP 3600, running the latest available software version, 12.1.2.2.0.276-HF, and according to the hardware/software compatibility matrix (https://support.f5.com/csp/article/K9476) it is not possible to move to a newer version. Is a separate patch planned for this version? On the page https://support.f5.com/csp/article/K52145254, the column "Versions known to be vulnerable" lists "12.1.0 - 12.1.5". How can I patch the system without a hardware change? Is there another option? BR Robert
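For reference: at the time, K52145254 also described a temporary mitigation for systems that could not be upgraded, adding an httpd include with a LocationMatch that blocks the ";" character in TMUI URLs. The sketch below shows one hypothetical way to push such an include over iControl REST with Python; the management address and credentials are placeholders, the /mgmt/tm/sys/httpd endpoint and its include property are assumed from the usual tmsh-to-REST mapping, and the mitigation text itself was revised several times, so copy the current text from the article rather than from this example.

```python
# Hypothetical sketch: apply an httpd "include" mitigation to a BIG-IP over
# iControl REST. Address, credentials, and the include text are placeholders;
# take the current mitigation text from K52145254 itself.
import requests

BIGIP_MGMT = "https://192.0.2.10"        # example management address
AUTH = ("admin", "change-me")            # use real credentials or token auth

MITIGATION_INCLUDE = '<LocationMatch ";">\nRedirect 404 /\n</LocationMatch>'

session = requests.Session()
session.auth = AUTH
session.verify = False                   # device uses a self-signed cert by default

resp = session.patch(
    f"{BIGIP_MGMT}/mgmt/tm/sys/httpd",   # REST path mirroring "tmsh modify sys httpd"
    json={"include": MITIGATION_INCLUDE},
)
resp.raise_for_status()
print("sys httpd include is now:", resp.json().get("include"))
```

Restricting access to the TMUI/management interface to a dedicated management network remains the more durable control, and upgrading to a fixed version where the hardware allows it is the real remediation.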
Can a current 2000s run HA with an i2600?

Dear all, although current platforms are not EOS yet, based on past trends the 2000s will probably be announced EOS sometime this year (my suspicion only; there has been no official statement from F5). The situation: a company insists on buying a 2000s today because of serious budget constraints. Toward the end of the year, or perhaps two years from now, they would like to buy the i2000 series, expecting the 2000s to be EOS by then. Can these two different platforms form an HA pair over the network? Both the 2000s and the iSeries support 12.1.1, so software compatibility may not be much of an issue in the near term. The predicted End of New Software Support date for the 2000s could be in 2019.
The Future of Cloud: Infrastructure as a Platform

Cloud needs to become a platform, and that means the infrastructure comprising it must also embrace the platform paradigm.

There's been a spate of articles, blogs, and mentions of OpenFlow in the past few months. IBM was the latest entry into the OpenFlow game, releasing an OpenFlow-enabled RackSwitch G8264, an update of a 64-port, 10 Gigabit Ethernet switch IBM put out a year ago. Interest in the specification appears to be growing, and not just because it's got the prefix-du-jour as part of its name, implying everything to everyone: free, extensible, interoperable, etc. While all those modifiers are indeed interesting and, to some, a highly important facet of the would-be standard, there's something else about it that is driving its popularity. That something else can be summed up with the statement: "infrastructure as a platform."

THE WEB 2.0 LESSON. AGAIN.

The importance of turning infrastructure into a platform can be seen in the commentary on Web 2.0 (a.k.a. social networking) applications and their failure or success to garner mind-share. Recently, a high-profile engineer at Google mistakenly posted a lengthy and refreshingly blunt commentary on what he views as Google's failure to recognize the importance of platform to successful offerings in today's demanding marketplace. To Google's credit, once the erroneous posting was discovered, it decided to let it stand, and thus we are able to glean some insight about the importance of platform to today's successful offerings:

While Yegge doesn't have a lot of good things to say about Amazon and its founder Jeff Bezos, he does note that Bezos, unlike Google, understands that it's not just about developing interesting products, but that it takes a platform to create a great product.
-- SiliconFilter, "Google Engineer: Google+ is a Prime Example of Our Complete Failure to Understand Platforms"

This insight is not restricted to software developers and engineers; the rising interest in PaaS (Platform as a Service) and the continued siren's song that it will dominate the cloud landscape in the future are tied to the same premise: it is the availability of a robust platform that makes or breaks solutions today, not features or functions or price. It is the ability to be successful by building, as Yegge says in his post, "an entire constellation of products by allowing other people to do the work."

Lest you think this concept applicable only to software, let me remind you of Nokia CEO Stephen Elop's somewhat blunt assessment of his company's failure to recognize this truth:

The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren't taking our market share with devices; they are taking our market share with an entire ecosystem. This means we're going to have to decide how we either build, catalyse or join an ecosystem.
-- DevCentral F5 Friday, "A War of Ecosystems"

Interestingly, 47% of respondents surveyed by Zenoss/Cloud.com for its Cloud Computing Outlook 2011 indicated use of PaaS in 2011. Like SaaS, PaaS has some wiggle room in its definition, but its general popularity seems to indicate that yes, indeed, platform is an important factor.
OpenFlow essentially provides this capability, turning infrastructure into a platform and enabling extensibility and customization that could not be achieved otherwise. It basically turns a piece of infrastructure into a giant backplane for new functions, features, and services. It introduces, allegedly, dynamism into what is typically a static network. It is what IaaS had the promise to be but has so far failed to achieve.

CLOUD as a PLATFORM

The takeaway for cloud and infrastructure providers is that organizations want platforms. Developers want platforms. Operations wants platforms (see Puppet and Chef as examples of operational platforms). It's about enabling an ecosystem that encourages innovation, i.e. new features, functions, and services, without requiring the wheel to be reinvented. It's about drag and drop, figuratively speaking, in the realm of infrastructure: bringing the ability to deploy new services atop a platform that provides the basics. OpenFlow promises just such capabilities for infrastructure, much in the same way Facebook provides these basics for game and application developers. Mobile platforms offer the same for devices and operating systems. It's about enabling an ecosystem in which organizations can focus not on the core infrastructure, but on the custom functionality and process automation that delivers efficiency to IT across operations and development alike.

"The beauty of this is it gives more flexibility and control to the network," said Shaughnessy [marketing manager for system networking at IBM], "so you could actually adjust the way the traffic flows go through your network dynamically based on what's going on with your applications."
-- IBM releases OpenFlow-enabled switch

It enables flexibility in the network, the means to deploy more dynamism in traffic policy enforcement and shaping, and it ties back to cloud with its ability to impart multi-tenant capabilities to infrastructure without completely modifying the internal architecture of components, a major obstacle for many network-focused devices. (A small conceptual sketch of the match/action model behind this flexibility follows the related links below.) OpenFlow is not a panacea; there are myriad reasons why it may not be appropriate as the basis for architecting the cloud platform foundation required to support future initiatives. But it is a prime example of the kind of platform-focused capabilities organizations desire as they move ahead in their journey to IT as a Service. The cloud on which organizations will be able to build their future data center architecture will be a platform, and that means from the bottom (infrastructure) to the middle (development) to the top (operations). What cloud and infrastructure providers must do is simulate the Facebook experience at the infrastructure layer. Infrastructure as a platform is the next step in the evolution of cloud computing.

Related articles:
- IT Services: Creating Commodities out of Complexity
- IBM releases OpenFlow-enabled switch
- The Cloud Configuration Management Conundrum
- IT as a Service: A Stateless Infrastructure Architecture Model
- If a Network Can't Go Virtual Then Virtual Must Come to the Network
- You Can't Have IT as a Service Until IT Has Infrastructure as a Service
- This is Why We Can't Have Nice Things
- WILS: Automation versus Orchestration
- The Infrastructure Turk: Lessons in Services
- Putting the Cloud Before the Horse
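To make the match/action point above concrete, here is a small, self-contained illustration of the flow-table idea that OpenFlow exposes to controllers. It is a toy sketch in plain Python, not a real controller and not the specification's field names; the value it illustrates is that forwarding policy becomes data a controller can rewrite on the fly, rather than configuration baked into each device.

```python
# Toy illustration of an OpenFlow-style match/action flow table.
# Field names (in_port, ip_dst, tcp_dst) are simplified stand-ins, not the spec's.
from dataclasses import dataclass


@dataclass
class FlowRule:
    priority: int        # higher priority wins, as with OpenFlow flow entries
    match: dict          # packet fields that must all match
    actions: list        # e.g. ["set_queue:premium", "output:3"]


def select_rule(table, packet):
    """Return the highest-priority rule whose match fields the packet satisfies."""
    candidates = [r for r in table
                  if all(packet.get(k) == v for k, v in r.match.items())]
    return max(candidates, key=lambda r: r.priority, default=None)


# A controller can rewrite this table at any time: that is the "platform" part.
table = [
    FlowRule(200, {"ip_dst": "10.0.0.5", "tcp_dst": 443}, ["set_queue:premium", "output:3"]),
    FlowRule(100, {"in_port": 1}, ["output:2"]),
    FlowRule(0, {}, ["drop"]),   # table-miss entry
]

packet = {"in_port": 1, "ip_dst": "10.0.0.5", "tcp_dst": 443}
rule = select_rule(table, packet)
print(rule.actions if rule else "no match")   # -> ['set_queue:premium', 'output:3']
```

In a real deployment the table lives in the switch and a controller installs or removes entries over the OpenFlow protocol; the point here is only the shape of the abstraction.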
Now shipping: the C2200 chassis, the smallest model in the Viprion series, the industry's only chassis-based ADC

F5 Networks Japan has announced the C2200, a new compact two-slot chassis in the Viprion series that extends the benefits of the F5 Synthesis architecture model. The C2200 joins the existing mid-range C2400, the higher-end C4480, and the flagship C4800, delivering the same functionality in a smaller, space-saving form factor at a more affordable price.

Key points:
- At 2RU (rack units), the smallest chassis in the Viprion series
- Supports the latest mid-range blades, the B2150 and B2250
- Holds up to two blades, allowing up to 40 vCMP virtual instances
- Requires software (TMOS) version 11.5.0 or later

For details, see the Viprion product page, which includes datasheets with specifications and a platform comparison table.

The Viprion C2200 lets you add scalable processing power while preserving the ability to upgrade the system as needs grow, delivering both the performance and the scale that business-critical application services require. Using F5's Virtual Clustered Multiprocessing (vCMP®) technology, it efficiently consolidates application services and underutilized application delivery controllers (ADCs) into a high-density multi-tenant solution.

Until now, in environments where major infrastructure growth was not expected, many customers have run existing Viprion chassis that hold up to four or eight blades with only one or two blades installed. For customers doing this kind of smaller-scale capacity planning, the C2200 makes it possible to offer the same scalability and virtualization solutions in a smaller chassis with a lower initial investment. Please consider the new Viprion C2200!

The product is now shipping. For more information, please contact F5 Networks Japan (https://interact.f5.com/JP-Contact.html) or your distributor.
Is PaaS Just Outsourced Application Server Platforms?

There's a growing focus on PaaS (Platform as a Service), particularly as Microsoft rolls out Azure and VMware continues to push forward with its SpringSource acquisition. Amazon, though generally labeled as IaaS (Infrastructure as a Service), is also a "player" with its SimpleDB and SQS (Simple Queue Service) and, more recently, its SNS (Simple Notification Service). There's also Force.com, the SaaS (Software as a Service) giant Salesforce.com's incarnation of a "platform," as well as Google's App Engine. As is the case with "cloud" in general, the definition of PaaS varies and depends entirely on to whom you're speaking at the moment.

What's interesting about SpringSource and Azure and many other PaaS offerings is that, as far as the customer is concerned, they're very much like an application server platform. The biggest difference is, of course, that the customer need not concern themselves with the underlying management and scalability. The application, however, is still the customer's problem. That's not that dissimilar from what enterprise-class organizations build out in their own data centers using traditional application server platforms like .NET and JavaEE. The application server platform is, well, a platform, in which multiple applications are deployed in their own cozy little isolated containers. You might even recall that JavaEE containers are called, yeah, "virtual machines." And even though Force.com and Google App Engine are proprietary platforms (and generally unavailable for deployment elsewhere), they still bear many of the characteristic marks of an application server platform.
The Three Axioms of Application Delivery

#fasterapp If you know these three axioms, then you'll know application delivery when you see it.

Like most technology jargon, there are certain terms and phrases that end up mangled, conflated, and generally misapplied as they gain traction in the wider market. Cloud is merely the latest incarnation of this phenomenon, and there will be others in the future. Guaranteed. Of late the term "application delivery" has been creeping into the vernacular. That could be because cloud has pushed it to the fore, necessarily. Cloud purports to eliminate the "concern" of infrastructure and allows IT to focus on … you guessed it, the application. Which in turn means the delivery of applications is becoming more and more pervasive in the strategic vocabulary of the market.

But like cloud and its predecessors, the term application delivery is somewhat vague and without definition. I am not going to define it, in case you were wondering, because quite frankly I've watched its expansion and transformation over the past decade and understand that application delivery is not static. As new technology and deployment models arise, new techniques and architectures must also arise to meet the challenges that naturally come along with those applications. But how, then, do you know what is and is not application delivery? If it can morph and grow and transform with time and technology, then anything can be considered application delivery, right?

Not entirely. Application delivery, after all, is about an end-to-end process. It's about a request that is sent to an application and subsequently fulfilled and returned to the originator of the request. Depending on the application, this process may be simple or exceedingly complex, requiring authentication, logging, verification, interaction of multiple services and, one hopes, a wealth of security services ensuring that what is delivered is what was intended and desired, and is not carrying along something malicious. A definition comprising these concepts would either be far too broad, and thus meaningless, or so narrow that it left no room to adapt to future technologies. Neither is acceptable, in my opinion. A much better way to understand what is (and conversely what is not) application delivery is to learn three simple axioms that define the core concepts upon which application delivery is based.

APPLICATION-CENTRIC

"Applications are not servers, hypervisors, or operating systems."

Applications are not servers. They are not the physical or virtual server upon which they are deployed and from which they draw core resources. They are not the web and application servers on which they rely for application-layer protocol support. They are not the network stack from which they derive their IP address or TCP connection characteristics. They are uniquely separate entities that must be managed individually.

The concrete example of this axiom in action is health monitoring of applications. Too many times we see load balancing services configured with health-checking options that are focused on IP, TCP, or HTTP parameters: ping checks, TCP half-open checks, HTTP status checks. None of these options are relevant to whether or not the application is available and executing correctly. A ping check assures us the network is operating and the OS is responding. A TCP half-open check tells us the network stack is operating properly. An HTTP status check tells us the web or application server is running and accepting requests.
But none of these even touches on whether or not the application is executing and responding correctly. Similarly, applications are not ports, and security services must be able to secure the application, not merely its operating environment. Applications are not, or should not be, defined by their network characteristics, and neither should they be secured based on those parameters. Applications are not servers, hypervisors, or operating systems. They are individual entities that must be managed individually, from a performance, availability, and security perspective. (A brief sketch of an application-aware health check follows at the end of this piece.)

MITIGATE OPERATIONAL RISK

"Availability, performance, and security are not separate operational challenges."

In most IT organizations the people responsible for security are not responsible for performance or availability, and vice versa. While devops tries to bridge the gap between application- and operations-focused professionals, we may need to intervene first and unify operations. These three operational concerns are intertwined; they are interrelated; they are fraternal triplets. A DDoS attack is security, but it has, or likely will have, a profound impact on both performance and availability. Availability has an impact on performance, both positive and negative. And too often performance concerns result in the avoidance of security that can ultimately return to bite availability in the derriere. Application delivery recognizes that all three components of operational risk are inseparable and must be viewed as a holistic concern. Each challenge should be addressed with the others in mind, and with the understanding that changes in one will impact the others.

OPERATE WITHIN CONTEXT

"Application delivery decisions cannot be made efficiently or effectively in a vacuum."

Finally, application delivery recognizes that decisions regarding application performance, security, and availability cannot be made in a vacuum. What may improve performance for a mobile client accessing an application over the Internet may actually impair performance for a mobile client accessing the application over the internal data center network. The authentication methods appropriate for a remote PC desktop are unlikely to be applicable to the same user requesting access from a smartphone. The various components of context provide the means by which the appropriate policies are enforced and applied at the right time to the right client for the right application. It is context that provides the unique set of parameters that enfolds any given request. We cannot base decisions solely on user, because a user may migrate during the day from one client device to another, and from one location to another. We cannot base decisions solely on device, because network conditions and type may change as the user roams from home to the office and out to lunch, moving seamlessly between mobile carrier network and WiFi. We cannot base decisions solely on application, because the means and location of the client may change its behavior and impact delivery in a negative way.

When you put these axioms into action, the result is application delivery: a comprehensive, holistic, and highly strategic approach to delivering applications.
It is impossible to say that application delivery is "these five products delivered as a solution," because whether or not those products actually comprise an application delivery network depends on whether or not they are able to deliver on the promise of these three axioms of application delivery.
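To ground the first axiom with something runnable (a minimal sketch, not F5 monitor configuration): the difference between checking a port and checking an application is the difference between "is something listening?" and "did the application answer correctly, fast enough?". The URL, the expected response marker, and the latency budget below are hypothetical examples.

```python
# Minimal sketch of an application-aware health check (standard library only).
import time
import urllib.error
import urllib.request


def app_is_healthy(url, expected_marker, max_latency_s=0.5):
    """True only if the app returns 200, contains expected content, and answers quickly."""
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=max_latency_s) as resp:
            body = resp.read(4096).decode("utf-8", errors="replace")
            status_ok = resp.status == 200
    except (urllib.error.URLError, OSError):
        return False                          # a ping or TCP half-open check might still pass here
    latency_ok = (time.monotonic() - started) <= max_latency_s
    content_ok = expected_marker in body      # proves application logic ran, not just the web server
    return status_ok and latency_ok and content_ok


if __name__ == "__main__":
    # Hypothetical status endpoint that reports on its own dependencies.
    print(app_is_healthy("http://127.0.0.1:8080/status", '"database": "ok"'))
```

A load balancer's monitor should make the same kind of judgement: mark the member down when the application's answer is wrong or slow, not merely when the port stops answering.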
Red Herring: Hardware versus Services

In a service-focused, platform-based infrastructure offering, the form factor is irrelevant.

One of the most difficult aspects of cloud, virtualization, and the rise of platform-oriented data centers is the separation of services from their implementation. This is SOA applied to infrastructure, and it is for some reason a foreign concept to most operational IT folks, with the sometimes exception of developers. But sometimes even developers are challenged by the notion, especially when it begins to include network hardware.

ARE YOU SERIOUS?

The headline read: WAN Optimization Hardware versus WAN Optimization Services. I read no further, because I was struck by the wrongness of the declaration in the first place. I'm certain that if I had read the entire piece I would have found it focused on the operational and financial benefits of leveraging WAN optimization as a service as opposed to deploying hardware (or software, a la virtual network appliances) in multiple locations. And while I've got a few things to say about that, too, today is not the day for that debate. Today is for focusing on the core premise of the headline: that hardware and services are somehow at odds. Today is for exposing the fallacy of a premise that is part of the larger transformational challenge with which IT organizations are faced as they journey toward IT as a Service and a dynamic data center.

This transformational challenge, often referred to by cloud and virtualization experts, is one that requires a change in thinking as well as culture. It requires a shift from thinking of solutions as boxes with plugs and ports toward viewing them as services with interfaces and APIs. It does not matter one whit whether those services are implemented using hardware or software (or perhaps even a combination of the two, a la a hybrid infrastructure model). What does matter is the interface, the API, the accessibility, as Google's Steve Yegge emphatically put it in his recent from-the-gut, not-meant-to-be-public rant. What matters is that a product is also a platform, because as Yegge so insightfully noted:

A product is useless without a platform, or more precisely and accurately, a platform-less product will always be replaced by an equivalent platform-ized product.

A platform is accessible; it has APIs and interfaces via which developers (consumer, partner, customer) can access the functions and features of the product (services) to integrate, instruct, and automate in a more agile, dynamic architecture. Which brings us back to the red herring known generally as "hardware versus services."

HARDWARE is FORM-FACTOR. SERVICE is INTERFACE.

This misstatement implies that hardware is incapable of delivering services. This is simply not true, any more than a statement implying software is intrinsically capable of delivering services would be true. That's because intrinsically nothing is actually a service unless it is enabled to be one. Unless it is, as today's vernacular is wont to say, a platform. Delivering X as a service can be achieved via hardware as well as software. One need only look at the varied load balancing services offered by cloud providers to understand that both hardware and software can be service-enabled with equal alacrity, if not with equal results in features and functionality. As long as the underlying platform provides the means by which services and their requisite interfaces can be created, the distinction between hardware and "services" is non-existent.
The definition of "service" neither includes nor precludes the use of hardware as the underlying implementation. Indeed, the value of a "service" is that it provides a consistent interface that abstracts (and therefore insulates) the service consumer from the underlying implementation. A true "service" ensures minimal disruption as well as continued compatibility in the face of upgrade and enhancement cycles. It provides flexibility and decreases the risk of lock-in to any given solution, because the implementation can be completely changed without requiring significant changes to the interface.

This is the transformational challenge that IT faces: to stop thinking of solutions in terms of deployment form factors and instead start looking at them with an eye toward the services they provide. Because ultimately IT needs to offer them "as a service" (which is a delivery and deployment model, not a form factor) to achieve the push-button IT envisioned by the term "IT as a Service."
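A short sketch of that last point, with hypothetical names and no real vendor API: the consumer codes against a service contract, and whether the implementation behind it is an appliance, a virtual edition, or a provider's managed offering becomes a deployment detail that can change without breaking callers.

```python
# Illustrative sketch: a service interface that hides whether the implementation
# is hardware, software, or a provider's managed offering. All names are hypothetical.
from abc import ABC, abstractmethod


class LoadBalancerService(ABC):
    """The contract consumers depend on; implementations are swappable."""

    @abstractmethod
    def add_member(self, pool, address, port):
        ...

    @abstractmethod
    def remove_member(self, pool, address, port):
        ...


class ApplianceBackedLB(LoadBalancerService):
    """Would drive a physical device's management API; stubbed here."""

    def add_member(self, pool, address, port):
        print(f"[appliance] add {address}:{port} to {pool}")

    def remove_member(self, pool, address, port):
        print(f"[appliance] remove {address}:{port} from {pool}")


class CloudProviderLB(LoadBalancerService):
    """Would drive a provider's API; stubbed here."""

    def add_member(self, pool, address, port):
        print(f"[cloud] register {address}:{port} with {pool}")

    def remove_member(self, pool, address, port):
        print(f"[cloud] deregister {address}:{port} from {pool}")


def scale_out(lb, pool, new_instances):
    """Consumer code: identical no matter which implementation sits behind the interface."""
    for address in new_instances:
        lb.add_member(pool, address, 8080)


scale_out(ApplianceBackedLB(), "web-pool", ["10.0.0.11", "10.0.0.12"])
scale_out(CloudProviderLB(), "web-pool", ["10.0.0.13"])
```

Swapping ApplianceBackedLB for CloudProviderLB changes nothing for the caller, which is exactly the insulation a true service interface is supposed to provide.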