Node.js at 100K+ requests per second
Folks who work with Node.js are familiar with its limitations. Its single-threaded nature makes Node highly dependent on single-CPU performance, which tends to be the main limitation on throughput. Achieving 10K HTTP requests per second is not unheard of, but that is barely a noticeable breeze in the world of internet traffic. Today's sites demand high throughput, and spinning up new server instances can be wasteful.

LineRate offers several ways to optimize your web app and resources for high throughput. LineRate is a high-performance application proxy with fully programmable access to the HTTP data path in the form of a Scripting API that utilizes Node.js. LineRate has all the enterprise-grade load balancer features you would expect, such as multiple load balancing algorithms to choose from, connection and session persistence, and SSL offloading, and it optimizes available resources to achieve high throughput and 100K+ requests per second.

The scripting API enables developers to reuse their Node.js code and offload simple processing (such as inspecting or rewriting the headers or body of HTTP requests) onto a single LineRate instance. For each HTTP request received, Node.js script code can inspect and modify the request before forwarding it to the web server; a brief sketch of this pattern appears below. Compute-intensive tasks such as authentication or virus scanning can be offloaded to external services in a non-blocking fashion. To get started visit linerate.f5.com.
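The following is a minimal, hypothetical sketch of that request-inspection pattern. It reuses only the lrs/virtualServerModule idioms shown in the HSTS examples further down this page; the virtual server name, the '/admin' path check, and the added response header are placeholders, not part of any real configuration.

'use strict';
var vsm = require('lrs/virtualServerModule');

function inspectAndForward(servReq, servRes, cliReq) {
    // Inspect the request before it is forwarded to the real server.
    if (servReq.url.indexOf('/admin') === 0) {
        // Example policy: refuse to proxy a path we never want exposed.
        var body = 'Forbidden';
        servRes.writeHead(403, {
            'Content-Type': 'text/plain',
            'Content-Length': body.length
        });
        servRes.end(body);
        return;
    }

    cliReq.on('response', function (cliRes) {
        // Copy the real server's response headers, tag the response,
        // then stream the body straight through.
        cliRes.bindHeaders(servRes);
        servRes.setHeader('X-Inspected-By', 'linerate-script');
        cliRes.fastPipe(servRes);
    });
    cliReq(); // forward the request to the real server
}

vsm.on('exist', 'exampleVirtualServer', function (vs) {
    vs.on('request', inspectAndForward);
});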
Microservices and HTTP/2

It's all about that architecture. There's a lot of things we do to improve the performance of web and mobile applications. We use caching. We use compression. We offload security (SSL and TLS) to a proxy with greater compute capacity. We apply image optimization and minification to content. We do all that because performance is king. Failure to perform can be, for many businesses, equivalent to an outage, with increased abandonment rates and angry customers taking to the Internet to express their extreme displeasure.

The recently official HTTP/2 specification takes performance very seriously, and introduced a variety of key components designed specifically to address the need for speed. One of these was to base the newest version of the Internet's lingua franca on SPDY. One of the impacts of this decision is that connections between the client (whether tethered or mobile) and the app (whether in the cloud or on big-iron) are limited to just one. One TCP connection per app.

That's a huge divergence from HTTP/1, where it was typical to open 2, 4 or 6 TCP connections per site in order to take advantage of broadband. And it worked, for the most part, because, well, broadband. So it wouldn't be a surprise if someone interprets that ONE connection per app limitation to be a negative in terms of app performance.

There are, of course, a number of changes in the way HTTP/2 communicates over that single connection that ultimately should counteract any potential negative impact on performance from the reduction in TCP connections. The elimination of the overhead of multiple DNS lookups (not insignificant, by the way) as well as TCP-related impacts from slow start and session setup, plus a more forgiving exchange of frames under the covers, is certainly a boon in terms of application performance. The ability to just push multiple responses to the client without having to play the HTTP acknowledgement game is significant in that it eliminates one of the biggest performance inhibitors of the web: latency arising from too many round trips. We've (as in the corporate We) seen gains of 2-3 times the performance of HTTP/1 with HTTP/2 during testing. And we aren't alone; there's plenty of performance testing going on out there, on the Internets, showing similar improvements.

Which is why it's important (very important) that we not undo all the gains of HTTP/2 with an architecture that mimics the behavior (and performance) of HTTP/1.

Domain Sharding and Microservices

Before we jump into microservices, we should review domain sharding, because the concept is important when we look at how microservices are actually consumed and delivered from an HTTP point of view. Scalability patterns (i.e. architectures) include the notion of Y-axis scale, which is a sharding-based pattern. That is, it creates individual scalability domains (or clusters, if you prefer) based on some identifiable characteristic in the request. User identification (often extricated from an HTTP cookie) and URL are commonly used information upon which to shard requests and distribute them to achieve greater scalability.

An incarnation of the Y-axis scaling pattern is domain sharding. Domain sharding, for the uninitiated, is the practice of distributing content to a variety of different host names within a domain. This technique was (and probably still is) very common to overcome connection limitations imposed by HTTP/1 and its supporting browsers.
You can see evidence of domain sharding when a web site uses images.example.com and scripts.example.com and static.example.com to optimize page or application load time. Connection limitations were by host (origin server), not domain, so this technique was invaluable in achieving greater parallelization of data transfers that made it appear, at least, that pages were loading more quickly. Which made everyone happy.

Until mobile came along. Then we suddenly began to realize the detrimental impact of introducing all that extra latency (every connection requires a DNS lookup, a TCP handshake, and suffers the performance impacts of TCP slow start) on a device with much more limited processing (and network) capability. I'm not going to detail the impact; if you want to read about it in more detail I recommend reading some material from Steve Souder and Tom Daly or Mobify on the subject. Suffice to say, domain sharding has an impact on mobile performance, and it is rarely a positive one.

You might think, well, HTTP/2 is coming and all that's behind us now. Except it isn't. Microservice architectures in theory, if not in practice, are ultimately a sharding-based application architecture that, if we're not careful, can translate into a domain sharding-based network architecture that ultimately negates any of the performance gains realized by adopting HTTP/2. That means the architectural approach you (that's you, ops) adopt for delivering microservices can have a profound impact on the performance of applications composed from those services.

The danger is not that each service will be its own (isolated and localized) "domain", because that's the whole point of microservices in the first place. The danger is that those isolated domains will be presented to the outside world as individual, isolated domains, each requiring their own personal, private connection by clients. Even if we assume there are load balancing services in front of each service (a good assumption at this point), that still means direct connections between the client and each of the services used by the client application, because the load balancing service acts as a virtual service but does not eliminate the isolation. Each one is still its own "domain" in the sense that it requires a separate, dedicated TCP connection. This is essentially the same thing as domain sharding, as each host requires its own IP address to which the client can connect, and its behavior is counterproductive to HTTP/2*.

What we need to do to continue the benefits of a single, optimized TCP connection while being able to shard the back end is to architect a different solution in the "big black box" that is the network. To be precise, we need to take advantage of the advanced capabilities of a proxy-based load balancing service rather than a simple load balancer.

An HTTP/2 Enabling Network Architecture for Microservices

That means we need to enable a single connection between the client and the server and then utilize capabilities like Y-axis sharding (content switching, L7 load balancing, etc...) in "the network" to maintain the performance benefits of HTTP/2 to the client while enabling all the operational and development benefits of a microservices architecture. What we can do is insert a layer 7 load balancer between the client and the local microservice load balancers.
The connection on the client side maintains a single connection in the manner specified (and preferred) by HTTP/2 and requires only a single DNS lookup, one TCP session start up, and incurs the penalties from TCP slow start only once. On the service side, the layer 7 load balancer also maintains persistent connections to the local, domain load balancing services, which also reduces the impact of session management on performance. Each of the local, domain load balancing services can be optimized to best distribute requests for each service. Each maintains its own algorithm and monitoring configurations, which are unique to the service, to ensure optimal performance.

This architecture is only minimally different from the default, but the insertion of a layer 7 load balancer capable of routing application requests based on a variety of HTTP variables (such as the cookies used for persistence or to extract user IDs, or the unique verb or noun associated with a service from the URL of a RESTful API call) results in a network architecture that closely maintains the intention of HTTP/2 without requiring significant changes to a microservice-based application architecture. A simplified sketch of this URL-based routing appears at the end of this article. Essentially, we're combining X- and Y-axis scalability patterns to architect a collaborative operational architecture capable of scaling and supporting microservices without compromising on the technical aspects of HTTP/2 that were introduced to improve performance, particularly for mobile applications. Technically speaking we're still doing sharding, but we're doing it inside the network and without breaking the one TCP connection per app specified by HTTP/2. Which means you get the best of both worlds - performance and efficiency.

Why DevOps Matters

The impact of new architectures - like microservices - on the network and the resources (infrastructure) that deliver those services is not always evident to developers or even ops. That's one of the reasons DevOps as a cultural force within IT is critical; because it engenders a breaking down of the isolated silos between ops groups that exist (all four of them) and enables greater collaboration that leads to more efficient deployment, yes, but also more efficient implementations. Implementations that don't necessarily cause performance problems that require disruptive modification to applications or services. Collaboration in the design and architectural phases will go a long way toward improving not only the efficacy of the deployment pipeline but the performance and efficiency of applications across the entire operational spectrum.

* It's not good for HTTP/1, either, as in this scenario there is essentially no difference** between HTTP/1 and HTTP/2.

** In terms of network impact. HTTP/2 still receives benefits from its native header compression and other performance benefits.
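As a rough, hypothetical sketch of the URL-based routing described above (plain Node.js rather than any particular load balancer's configuration; every host, port and service name here is invented for illustration), a layer 7 router can pick a per-service pool from the first segment of a RESTful URL and proxy the request there while the client keeps its single front-end connection:

'use strict';
var http = require('http');

// Hypothetical per-service load balancer addresses.
var pools = {
    orders:  { host: '10.0.1.10', port: 8080 },
    users:   { host: '10.0.2.10', port: 8080 },
    catalog: { host: '10.0.3.10', port: 8080 }
};
var defaultPool = { host: '10.0.0.10', port: 8080 };

http.createServer(function (req, res) {
    // '/orders/123' -> 'orders'
    var service = req.url.split('/')[1];
    var target = pools[service] || defaultPool;

    var upstream = http.request({
        host: target.host,
        port: target.port,
        method: req.method,
        path: req.url,
        headers: req.headers
    }, function (upRes) {
        res.writeHead(upRes.statusCode, upRes.headers);
        upRes.pipe(res);   // stream the service's response back to the client
    });
    req.pipe(upstream);    // stream the client's request body to the service
}).listen(8000);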
Node.js HTTP Message Body: Transfer-Encoding and Content-Length in LineRate lrs/virtualServer Module

Summary: (refer to AskF5 solution article 15884)

In the LineRate scripting environment we consult the Content-Length header of HTTP messages entering the system (see the LineRate Developer Scripting Guide under the topic "Writing Past the Content-Length"). Therefore, when performing transformations to an HTTP message body, be it a request or response, we must ensure that if a Content-Length header is present in the original message, we send accurate information reflecting our changes in the message we transmit to the end host.

Full Story:

When I first began investigating what node.js was, I stumbled upon a video presentation by Ryan Dahl (node's creator) from 2011. If you fast-forward to about 14:00 in the video, Ryan is talking about the node HTTP module. One of the features Ryan mentions is that the HTTP module's "server" object provides HTTP/1.1 responses. Along with the Keep-Alive header, the node HTTP server automatically "chunks" the response; that is, the server will automatically apply the "Transfer-Encoding: chunked" header to a response. Ryan explains that this enables node to automatically handle variable length responses. This also enables you to call the write() method multiple times on an active HTTP response object without prior knowledge of the total length of the response you want to send. This is a great feature when you know all of your clients will be fully HTTP/1.1 compliant.

The LineRate application proxy's scripting API provides for manipulating the HTTP data path, and when acting as part of a full-proxy architecture the data path requires the management of both HTTP server functionality and HTTP client functionality. This enables users of LineRate to make transformations to HTTP responses that originate from other servers ("real-servers" in LROS configuration terms). Since LineRate stands between the origin servers and the HTTP clients, we must take care to preserve application behavior regardless of the version of HTTP the server provides. Herein lies one of the differences in the design of the LineRate lrs/virtualServer module and the node HTTP module's server objects.

Bringing the previous paragraphs together: when you apply content transformations to HTTP message bodies using the lrs/virtualServer module, you must be conscious of the original message's framing method; that is, does the message use "Transfer-Encoding: chunked", or does it use a "Content-Length:" header value to delineate boundaries of HTTP message bodies? The LineRate Scripting Developer Guide addresses this issue, and it is briefly outlined in a recent AskF5 solution article: Overview of Node.js auto-chunking behavior on LineRate systems.

In order to better understand this behavior I built a small node module that appends some chosen content to the end of an HTTP response. This script makes use of the Node.js 'Modules' API, where it adds the object name "ResponseAppender" to the "exports" namespace.
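The original listing is not reproduced on this page, so below is a hypothetical reconstruction of what such a module could look like. The lrs/virtualServerModule calls mirror the HSTS examples further down this page, while the 'data'/'end' stream handling and the behavior of servRes.end() with a Buffer are assumptions; consult the LineRate Scripting Developer Guide for the exact API.

'use strict';

// ResponseAppender(virtualServer, appendString, fixHeader):
//   appendString - arbitrary string appended to every response body
//   fixHeader    - true: recalculate Content-Length when present
//                  false: leave the real server's headers untouched
function ResponseAppender(virtualServer, appendString, fixHeader) {
    virtualServer.on('request', function (servReq, servRes, cliReq) {
        cliReq.on('response', function (cliRes) {
            var chunks = [];
            cliRes.bindHeaders(servRes);
            cliRes.on('data', function (chunk) {
                chunks.push(chunk);          // buffer the original body
            });
            cliRes.on('end', function () {
                var body = Buffer.concat(chunks);
                var appended = Buffer.concat([body, new Buffer(appendString)]);
                if (fixHeader &&
                    cliRes.headers['content-length'] !== undefined) {
                    // Reflect the appended bytes in the Content-Length header.
                    servRes.setHeader('Content-Length', appended.length);
                }
                servRes.end(appended);
            });
        });
        cliReq();
    });
}

exports.ResponseAppender = ResponseAppender;

// Hypothetical use from an in-line script (names are placeholders):
//   var appender = require('response_appender');
//   var FIX_HEADER = true;
//   vsm.on('exist', 'test-vip', function (vs) {
//       new appender.ResponseAppender(vs,
//           '<script>alert("SURPRISE!")</script>', FIX_HEADER);
//   });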
In order to use this script as a module on your LineRate system, simply copy this code as a file to the directory /home/linerate/data/scripting/, and then 'require' the module from within an in-line script where we call 'new' on the exported function, passing three parameters: the virtual server we want to use the module with as the first parameter, an arbitrary string as the second parameter, and a boolean as the third parameter ('true' means we correct the value of the Content-Length on the final response if present, 'false' means we leave the headers exactly as the real-server provided them).

Here is an example of a partial LROS configuration with a virtual server and inline script that makes use of this module (assuming the above module file is named 'response_appender.js'):

With the above configuration objects (and assuming "test-vip" and "some-server" refer to appropriate corresponding configuration objects), we can easily toggle the "FIX_HEADER" value from the LineRate Manager web GUI, or by using the CLI from the configuration context with the command:

LROS(config-script:appender)# source edit vim
## or
LROS(config-script:appender)# source edit emacs

Once this scenario is configured we should be able to observe the following:

CASE #1: FIX_HEADER == false, and the real-server provides a Content-Length header

Here we would expect to see no change in a browser when we receive the response from the virtual server. Because the Content-Length header is preserved from the original response, the browser stops short of reading the appended content. However, if we use a tool like tcpdump or wireshark to capture the network stream, we can actually see the appended string being sent over the wire.

CASE #2: FIX_HEADER == true, and the real-server provides a Content-Length header

This time, because FIX_HEADER is true, we actually calculate a new Content-Length value that includes the byte count of the appended string. Now if we make a request to the test-vip with a browser, we expect to see the pop-up text "SURPRISE!" because now the browser knows to read the additional bytes of the appended string.

CASE #3: the real-server provides the response with Transfer-Encoding: chunked or TE: chunked

In this case nothing changes. We can observe that LineRate behaves exactly like the node HTTP server objects, where any content written to the response stream is properly framed as an HTTP/1.1 chunk. We expect the response to result in a browser displaying the "SURPRISE!" pop-up message.

Now that you know how it works, download your own copy of LROS, get a free tier license and try it yourself!

Enforcing HSTS (HTTP Strict Transport Security) in LineRate
HTTP Strict Transport Security is a policy between your customer's browsers and your servers to increase security. It forces the browser to always use HTTPS when connecting to your site. The server or proxy needs to set the Strict-Transport-Security header. If the client connects sometime in the future and isn't offered a valid SSL cert, it should show an error to the user. Also, if the client is somehow directed to a plaintext URL at your site, for instance from the address bar, it should turn it into an HTTPS URL before connecting.

This policy can prevent simple attacks where an attacker is positioned in the network temporarily between a client and your site. If they can connect to the client in plaintext, and the user isn't carefully checking for the green browser lock symbol, they can act as a man in the middle and read all the data flowing between clients and servers. See Moxie Marlinspike's presentation.

HSTS also saves you if you're worried about some piece of infrastructure accidentally emitting a non-secure link. Modern sites are a hybrid of client-side and server-side code; browser-facing content and APIs; core applications and peripheral systems like analytics, support, or advertising. What are the odds that one of these systems will someday use a URL that's not HTTPS? With HSTS, you can prevent attacks that take advantage of this accident. To make HSTS effective in this case, you should place it in a proxy outside of all of these systems.

First I'll show a simple script to enable HSTS on LineRate; next I'll show an enhancement to detect the plaintext URLs that are leaking on clients that don't obey HSTS.

Simple: Add HSTS Header to Responses

To enable HSTS for your site, simply catch responses and add the "Strict-Transport-Security" header:

var vsm = require('lrs/virtualServerModule');

// Set this to the amount of time a browser should obey HSTS for your site
var maxAge = 365*24*3600; // One year, in seconds

function setHsts(servReq, servRes, cliReq) {
    cliReq.on('response', function (cliRes) {
        cliRes.bindHeaders(servRes);
        servRes.setHeader('Strict-Transport-Security', 'max-age=' + maxAge);
        cliRes.fastPipe(servRes);
    });
    cliReq();
}

vsm.on('exist', 'yourVirtualServerName', function (vs) {
    vs.on('request', setHsts);
});

For any requests that come through a virtual server named yourVirtualServerName, LineRate will add the header to the response. In this example, the maxAge variable means that the browser should enforce HTTPS for your site for the next year from when it saw this response header. As long as users visit your site at least once a year, their browser will enforce that all URLs are HTTPS.

Advanced: Detect plaintext leaks and HSTS issues

In a more advanced script, you can also detect requests to URLs that aren't HTTPS. Note that HSTS requires the browser to enforce the policy; some browsers don't support it (Internet Explorer does not as of this writing; Safari didn't until Mavericks). For those users, you'll still need to detect any plaintext "leaks". Or maybe you're a belt-and-suspenders kind of person, and you want to tell your servers to add HSTS, but also detect a failure to do so in your proxy. In these cases, the script below will detect the problem, collect information, record it, and work around it by redirecting to the HTTPS URL.
var vsm = require('lrs/virtualServerModule');
var util = require('util');

// Set this to the domain name to redirect to
var yourDomain = 'www.yoursite.com';
// Set this to the amount of time a browser should obey HSTS for your site
var maxAge = 365*24*3600; // One year, in seconds

var stsValParser = /max-age=([0-9]+);?/;

function detectAndFixHsts(servReq, servRes, cliReq) {
    cliReq.on('response', function (cliRes) {
        cliRes.bindHeaders(servRes);
        var stsVal = cliRes.headers['strict-transport-security'];
        var stsMatch = stsVal ? stsVal.match(stsValParser) : null;
        if (!stsMatch) {
            // Strict-Transport-Security header missing or not valid.
            console.log('[WARNING] Strict-Transport-Security header not set ' +
                        'properly for URL %s. Value: %s. Request Headers: %s' +
                        ', response headers: %s', servReq.url, stsVal,
                        util.inspect(servReq.headers),
                        util.inspect(cliRes.headers));
            servRes.setHeader('Strict-Transport-Security', 'max-age=' + maxAge);
        }
        cliRes.fastPipe(servRes);
    });
    cliReq();
}

function redirectToHttps(servReq, servRes, cliReq) {
    // This is attached to the non-SSL VIP.
    var referer = servReq.headers['referer'];
    if (referer === undefined ||
        (referer.lastIndexOf('http://' + yourDomain, 0) == -1)) {
        // Referred from another site on the net; not a leak in your site.
    } else {
        // Leaked a plaintext URL or user is using a deprecated client
        console.log('[WARNING] Client requested non-HTTPS URL %s. ' +
                    'User-Agent: %s, headers: %s', servReq.url,
                    servReq.headers['user-agent'],
                    util.inspect(servReq.headers));
    }
    var httpsUrl = 'https://' + yourDomain + servReq.url;
    var redirectBody = '<html><head><title> ' + httpsUrl + ' Moved</title>' +
        '</head><body><p>This page has moved to <a href="' + httpsUrl + '">' +
        httpsUrl + '</a></p></body></html>';
    servRes.writeHead(302, {
        'Location': httpsUrl,
        'Content-Type' : 'text/html',
        'Content-Length' : redirectBody.length
    });
    servRes.end(redirectBody);
}

vsm.on('exist', 'yourHttpsVirtualServer', function (vs) {
    vs.on('request', detectAndFixHsts);
});
vsm.on('exist', 'yourPlainHttpVirtualServer', function (vs) {
    vs.on('request', redirectToHttps);
});

Note that logging every non-HTTPS request can limit performance and fill up disk. Alternatives include throttled logging (try googling "npm log throttling"), recording URLs to a database, or keeping a cache of URLs that we've already reported. If you're interested, let me know in the comments and I can cover some of these topics in future blog posts.

Cloud bursting, the hybrid cloud, and why cloud-agnostic load balancers matter
Cloud Bursting and the Hybrid Cloud

When researching cloud bursting, there are many directions Google may take you. Perhaps you come across services for airplanes that attempt to turn cloudy wedding days into memorable events. Perhaps you'd rather opt for a service that helps your IT organization avoid rainy days. Enter cloud bursting ... yes, the one involving computers and networks instead of airplanes.

Cloud bursting is a term that has been around in the tech realm for quite a few years. It is, in essence, the ability to allocate resources across various public and private clouds as an organization's needs change. These needs could be economic drivers, such as Cloud 2 having lower cost than Cloud 1, or perhaps capacity drivers, where additional resources are needed during business hours to handle traffic. For intelligent applications, other interesting things are possible with cloud bursting where, for example, demand in a geographical region suddenly needs capacity that is not local to the primary, private cloud. Here, one can spin up resources to locally serve the demand and provide a better user experience. Nathan Pearce summarizes some of the aspects of cloud bursting in this minute-long video, which is a great resource to remind oneself of some of the nuances of this architecture.

While cloud bursting is a term that is generally accepted by the industry as an "on-demand capacity burst," Lori MacVittie points out that this architectural solution eventually leads to a Hybrid Cloud, where multiple compute centers are employed to serve demand among both private-based resources and public-based resources, or clouds, all the time. The primary driver for this: practically speaking, there are limitations around how fast data that is critical to one's application (think databases, for example) can be replicated across the internet to different data centers. Thus, the promises of "on-demand" cloud bursting scenarios may be short lived, eventually leaning in favor of multiple "always-on compute capacity centers" as loads increase for a given application. In any case, it is important to understand that multiple locations, across multiple clouds, will ultimately be serving application content in the not-too-distant future.

Figure: An example hybrid cloud architecture where services are deployed across multiple clouds. The "application stack" remains the same, using LineRate in each cloud to balance the local application, while a BIG-IP Local Traffic Manager balances application requests across all of the clouds.

Advantages of cloud-agnostic Load Balancing

As one might conclude from the Cloud Bursting and Hybrid Cloud discussion above, having multiple clouds running an application creates a need for user requests to be distributed among the resources and for automated systems to be able to control application access and flow. In order to provide the best control over how one's application behaves, it is optimal to use a load balancer to serve requests. No DNS or network routing changes need to be made and clients continue using the application as they always did as resources come online or go offline; many times, too, these load balancers offer advanced functionality alongside the load balancing service that provides additional value to the application. Having a load balancer that operates the same way no matter where it is deployed becomes important when resources are distributed among many locations.
Understanding expectations around configuration, management, reporting, and behavior of a system limits issues for application deployments and discrepancies between how one platform behaves versus another. With a load balancer like F5's LineRate product line, anyone can programmatically manage the servers providing an application to users. Leveraging this programmatic control, application providers have an easy way to spin up and down capacity in any arbitrary cloud, retain a familiar yet powerful feature-set for their load balancer, ultimately redistribute resources for an application, and provide a seamless experience back to the user. No matter where the load balancer deployment is, LineRate can work hand-in-hand with any web service provider, whether considered a cloud or not. Your data, and perhaps more importantly cost-centers, are no longer locked down to one vendor or one location. With the right application logic paired with LineRate Precision's scripting engine, an application can dynamically react to take advantage of market pricing or general capacity needs.

Consider the following scenarios where cloud-agnostic load balancers have advantages over vendor-specific ones:

Economic Drivers

Time-dependent instance pricing: Spot instances with much lower cost become available at night. Example: my startup's billing system can take advantage of better pricing per unit of work in the public cloud at night versus the private datacenter.

Multiple vendor instance pricing: Cloud 2 just dropped their high-memory instance pricing lower than Cloud 1's. Example: useful for your workload during normal business hours; my application's primary workload is migrated to Cloud 2 with a simple config change.

Competition: Having multiple cloud deployments simultaneously increases competition, and thus your organization's negotiated pricing contracts become more attractive over time.

Computational Drivers

Traffic Spikes: Someone in marketing just tweeted about our new product. All of a sudden, the web servers that traditionally handled all the loads thrown at them just fine are getting slashdotted by people all around North America placing orders. Instead of having humans react to the load and spin up new instances to handle the load - or even worse: doing nothing - your LineRate system and application worked hand-in-hand to spin up a few instances in Microsoft Azure's Texas location and a few more in Amazon's Virginia region. This helps you distribute requests from geographically diverse locations: your existing datacenter in Oregon, the central US Microsoft Cloud, and the east-coast based Amazon Cloud. Orders continue to pour in without any system downtime, or worse: lost customers.

Compute Orchestration: A mission-critical application in your organization's private cloud unexpectedly needs extra compute power, but needs to stay internal for compliance reasons. Fortunately, your application can spin up public cloud instances and migrate traffic out of the private datacenter without affecting any users or data integrity. Your LineRate instance reaches out to Amazon to boot instances and migrate important data. More importantly, application developers and system administrators don't even realize the application has migrated since everything behaves exactly the same in the cloud location. Once the cloud systems boot, alerts are made to F5's LTM and LineRate instances that migrate traffic to the new servers, allowing the mission-critical app to compute away. You just saved the day!
The benefit of having a cloud-agnostic load balancing solution for connecting users with an organization's applications is not only a unified user experience, but also a powerful, unified way of controlling the application for its administrators. If all of a sudden an application needs to be moved from, say, a private datacenter with a 100 Mbps connection to a public cloud with a GigE connection, this can easily be done without having to relearn a new load balancing solution.

F5's LineRate product is available for bare-metal deployments on x86 hardware, virtual machine deployments, and has recently been made available as an Amazon Machine Image (AMI). All of these deployment types leverage the same familiar, powerful tools that LineRate offers: lightweight and scalable load balancing, modern management through its intuitive GUI or the industry-standard CLI, and automated control via its comprehensive REST API. LineRate Point Load Balancer provides hardened, enterprise-grade load balancing and availability services, whereas LineRate Precision Load Balancer adds powerful Node.js programmability, enabling developers and DevOps teams to leverage thousands of Node.js modules to easily create custom controls for application network traffic. Learn about some of LineRate's advanced scripting and functionality here, or try it out for free to see if LineRate is the right cloud-agnostic load balancing solution for your organization.

Working with Node.js variable type casts, raw binary data
I recently started writing an application in Node.js that dealt with reading raw data from a file, doing some action on it, then sending the data over an HTTP connection in the HTTP body as multipart/binary. Until now I had always dealt with text and strings. Through data validation at the bit level, I learned the hard way what creating type mismatches does to variables and how it compromises data integrity. Almost all examples I've come across online assume one is dealing with strings, and none deal with raw binary data. This article is a product of my mishaps, and of thorough analysis by colleagues who provided much valuable insight into Node.js rules.

Let's start with a simple example, where we define several variables of different types, then use the += operator to concatenate string data to these variables.

"use strict";
var fs = require('fs');

var mydata = 'somedata';
var tmpvar1;
var tmpvar2 = '';
var tmpvar3 = null;
var tmpvar4 = [];
var tmpvar5 = {};
var tmpvar6 = null;
var tmpvar7 = 0;

tmpvar1 += mydata;
tmpvar2 += mydata;
tmpvar3 += mydata;
tmpvar4 += mydata;
tmpvar5 += mydata;
tmpvar6 = mydata;
tmpvar7 += mydata;

console.log('length of mydata is: ',mydata.length,' , length of tmpvar1 is: ',tmpvar1.length, ' , tmpvar1 contents are: '+tmpvar1);
console.log('length of mydata is: ',mydata.length,' , length of tmpvar2 as \'\' is: ',tmpvar2.length, ' , tmpvar2 contents are: '+tmpvar2);
console.log('length of mydata is: ',mydata.length,' , length of tmpvar3 as null is: ',tmpvar3.length, ' , tmpvar3 contents are: '+tmpvar3);
console.log('length of mydata is: ',mydata.length,' , length of tmpvar4 as \[\] is: ',tmpvar4.length, ' , tmpvar4 contents are: '+tmpvar4);
console.log('length of mydata is: ',mydata.length,' , length of tmpvar5 as \{\} is: ',tmpvar5.length, ' , tmpvar5 contents are: '+tmpvar5);
console.log('length of mydata is: ',mydata.length,' , length of tmpvar6 as assigned (not appended) is: ',tmpvar6.length, ' , tmpvar6 contents are: '+tmpvar6);
console.log('length of mydata is: ',mydata.length,' , length of tmpvar7 as 0 is: ',tmpvar7.length, ' , tmpvar7 contents are: '+tmpvar7);

When run, this is the output it produces. Comparing the lengths of the variables and their contents, and correlating that to the type definition of each tmpvar, we can explain what is occurring. Inline are comments on why things are behaving this way.

length of mydata is: 8 , length of tmpvar1 is: 17 , tmpvar1 contents are: undefinedsomedata

The first concern is normal Javascript behavior (undefined variable += string). Javascript attempts to be smart about type casting into what you are asking the script to do, so in this case it adds a string to an undefined variable. The undefined value is converted to a string and concatenated with mydata. The primitive type undefined will presume a value of the string "undefined" when you attempt to add any string to it.
In an actual program, it would probably make sense to either 1) initialize the variable to a null string, or 2) do a type check for undefined, if (typeof tmpvar1 === 'undefined'), and just assign the variable directly to the first string value: { tmpvar1 = mydata; }

length of mydata is: 8 , length of tmpvar2 as '' is: 8 , tmpvar2 contents are: somedata
length of mydata is: 8 , length of tmpvar3 as null is: 12 , tmpvar3 contents are: nullsomedata
length of mydata is: 8 , length of tmpvar4 as [] is: 8 , tmpvar4 contents are: somedata
length of mydata is: 8 , length of tmpvar5 as {} is: 23 , tmpvar5 contents are: [object Object]somedata
length of mydata is: 8 , length of tmpvar6 as assigned (not appended) is: 8 , tmpvar6 contents are: somedata

Assigning a string causes the resulting variable type to be a string.

length of mydata is: 8 , length of tmpvar7 as 0 is: 9 , tmpvar7 contents are: 0somedata

0 is converted to a string and concatenated.

Now our example is changed to read raw binary data from a file:

var t1 = 1;
var tmpvar21;
var tmpvar22 = '';
var tmpvar23 = null;
var tmpvar24 = [];
var tmpvar25 = {};
var tmpvar26 = null;
var tmpvar27 = 0;

// file.sample is any binary file larger than 64KB.
var fsread = fs.createReadStream("file.sample", { end: false });
fsread.on('error',function(e){
    console.log('debug -- got file read error: ',e);
}).on('readable', function() {
    if(t1 == 1) { var chunk = fsread.read(); t1 = 0; }   // Reads in a chunk from the file, default chunk size
    else { var chunk = fsread.read(20); t1 = 1; }        // Reads in a chunk from the file, chunk size is 20
    tmpvar21 += chunk;
    tmpvar22 += chunk;
    tmpvar23 += chunk;
    tmpvar24 += chunk;
    tmpvar25 += chunk;
    tmpvar26 = chunk;
    tmpvar27 += chunk;
    console.log('length of chunk is: ',chunk.length,' , length of tmpvar21 is: ',tmpvar21.length);
    console.log('length of chunk is: ',chunk.length,' , length of tmpvar22 as \'\' is: ',tmpvar22.length);
    console.log('length of chunk is: ',chunk.length,' , length of tmpvar23 as null is: ',tmpvar23.length);
    console.log('length of chunk is: ',chunk.length,' , length of tmpvar24 as \[\] is: ',tmpvar24.length);
    console.log('length of chunk is: ',chunk.length,' , length of tmpvar25 as \{\} is: ',tmpvar25.length);
    console.log('length of chunk is: ',chunk.length,' , length of tmpvar26 as assigned (not appended) is: ',tmpvar26.length);
    console.log('length of chunk is: ',chunk.length,' , length of tmpvar27 as 0 is: ',tmpvar27.length);
    if(t1) { process.exit(0); }
}).on('end', function() {
    process.exit(1);
})

Output I get running node v0.12 is:

length of chunk is: 65536 , length of tmpvar21 is: 65544
length of chunk is: 65536 , length of tmpvar22 as '' is: 65535

Since we have not called fsread.setEncoding(), fs.read() is returning a buffer. Hence this is a string + buffer operation, or interpreted by node as string + buffer.toString(). This indicates that the toString() on the buffer returns 65535 characters from 65536 bytes. Since the data read in is raw binary, the guess is that we have a non-UTF8 character that gets removed when converted to a string.

length of chunk is: 65536 , length of tmpvar23 as null is: 65539
length of chunk is: 65536 , length of tmpvar24 as [] is: 65535
length of chunk is: 65536 , length of tmpvar25 as {} is: 65550
length of chunk is: 65536 , length of tmpvar26 as assigned (not appended) is: 65536
length of chunk is: 65536 , length of tmpvar27 as 0 is: 65536

This is number + buffer. It looks like both are converted to strings; the length will be one more than tmpvar22, which it is.
length of chunk is: 20 , length of tmpvar21 is: 65564
length of chunk is: 20 , length of tmpvar22 as '' is: 65555
length of chunk is: 20 , length of tmpvar23 as null is: 65559
length of chunk is: 20 , length of tmpvar24 as [] is: 65555
length of chunk is: 20 , length of tmpvar25 as {} is: 65570
length of chunk is: 20 , length of tmpvar26 as assigned (not appended) is: 20
length of chunk is: 20 , length of tmpvar27 as 0 is: 65556

The lesson here is: do not mix variables with different type definitions, and if you do, ensure you are getting the result you want!

So how do we deal with raw data if there is no raw-data variable type? Node.js uses the Buffer class for this. If you plan to use a buffer variable to append data to, you need to initialize it with new Buffer(0). Also note that using the += operator to append Buffer data containing raw binary data does not work; we need to use Buffer.concat() for this. Here is sample code:

var mybuff = new Buffer(0);
var fsread = fs.createReadStream("file.sample");
fsread.on('error',function(e){
    console.log('Error reading file: ',e);
}).on('data', function(chunk) {
    mybuff = Buffer.concat([mybuff,chunk]);
}).on('end', function() {
    process.exit(1);
});

If you have a large amount of raw data you want to read in and then act on, the suggestion is not to use Buffer.concat() to create one large buffer. Instead, for better performance, push the data into an array and iterate through the array elements at the end. If at all possible, deal with the data on the spot, avoiding having to cache it, making your app more dynamic and less dependent on memory resources. Certainly, if you are just reading and writing raw data between streams (filesystem to HTTP, or vice versa), using Node.js stream.pipe() is the way to do it.

var myarray = [];
var fsread = fs.createReadStream("file.sample");
fsread.on('error',function(e){
    console.log('Error reading file: ',e);
}).on('data', function(chunk) {
    myarray.push(chunk);
}).on('end', function() {
    process.exit(1);
});
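As a small, hypothetical extension of the array-based snippet above (using only the standard Node.js Buffer API), the 'end' handler could assemble the accumulated chunks into one contiguous buffer with a single allocation, instead of one copy per chunk:

.on('end', function() {
    var whole = Buffer.concat(myarray);   // one copy, done once at the end
    console.log('read', whole.length, 'bytes in', myarray.length, 'chunks');
});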
Snippet #7: OWASP Useful HTTP Headers

If you develop and deploy web applications then security is on your mind. When I want to understand a web security topic I go to OWASP.org, a community dedicated to enabling the world to create trustworthy web applications. One of my favorite OWASP wiki pages is the list of useful HTTP headers. This page lists a few HTTP headers which, when added to the HTTP responses of an app, enhance its security practically for free. Let's examine the list…

These headers can be added without concern that they affect application behavior:

X-XSS-Protection - Forces the enabling of cross-site scripting protection in the browser (useful when the protection may have been disabled)
X-Content-Type-Options - Prevents browsers from treating a response differently than the Content-Type header indicates

These headers may need some consideration before implementing:

Public-Key-Pins - Helps avoid *-in-the-middle attacks using forged certificates
Strict-Transport-Security - Enforces the use of HTTPS in your application, covered in some depth by Andrew Jenkins
X-Frame-Options / Frame-Options - Used to avoid "clickjacking", but can break an application; usually you want this
Content-Security-Policy / X-Content-Security-Policy / X-Webkit-CSP - Provides a policy for how the browser renders an app, aimed at avoiding XSS
Content-Security-Policy-Report-Only - Similar to CSP above, but only reports, no enforcement

Here is a script that incorporates three of the above headers, which are generally safe to add to any application (a sketch is shown below). And that's it: about 20 lines of code to add 100 more bytes to the total HTTP response, and enhanced application security! Go get your own FREE license and try it today!
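The original script is not reproduced on this page, so here is a hedged sketch of what it could look like, using the same lrs/virtualServerModule response pattern as the HSTS examples above (the virtual server name is a placeholder):

'use strict';
var vsm = require('lrs/virtualServerModule');

function addSecurityHeaders(servReq, servRes, cliReq) {
    cliReq.on('response', function (cliRes) {
        cliRes.bindHeaders(servRes);
        // The two "safe to add" headers from the list above.
        servRes.setHeader('X-XSS-Protection', '1; mode=block');
        servRes.setHeader('X-Content-Type-Options', 'nosniff');
        // Usually desirable, but verify nothing legitimately frames your app.
        servRes.setHeader('X-Frame-Options', 'SAMEORIGIN');
        cliRes.fastPipe(servRes);
    });
    cliReq();
}

vsm.on('exist', 'yourVirtualServerName', function (vs) {
    vs.on('request', addSecurityHeaders);
});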
Case Study: LineRate's C++98 to C++11 Migration

Have you heard about C++11 and the huge changes in the language? Are you terrified to turn on the flag in your compiler? Are your developers clamouring for modern language features? The LineRate team has been excited about the new C++11 features for a few years: lambdas, range-based for loops, move semantics, the 'auto' keyword, etc... However, we haven't upgraded. We needed to upgrade Boost and GCC to C++11-compatible versions, along with a few other libraries. Now that we have upgraded those, we still don't know what level of effort and amount of time are required to migrate our code. With such a large language change there are many opportunities for things to go wrong and maybe we will need to make extensive code changes. LineRate has extensive regression testing, so we are not overly concerned about introducing new bugs. Our regression test suite runs nightly and has good code coverage. Let's dive in...

Build System Changes

For better or worse, our build system is SCons. It is simple to add the compiler flag to enable C++11 building: just add '-std=c++11' to the CXXFLAGS and off you go. Since this is going to be an experiment for the time being, we'll make it optional. Easy enough: add a command-line switch to SCons to determine if this flag should be added or not. The same can be done with Make by adding a new target. Our changes are something like this:

cpp11 = int(ARGUMENTS.get('cpp11', 0))
cpp11Flags = []
if (1 == cpp11):
    cpp11Flags.append('-std=c++11')
    print "C++11 Mode Enabled"
env.AppendUnique(CXXFLAGS = cpp11Flags)

This works. We can see in the SCons jobs that it is passing '-std=c++11' to gcc:

g++48 -o some_code.o -c -O2 -Wall -Werror -std=c++11 -I/some/includes some_code.cc

Now come the build failures! Time to port the code to the new standard.

Product and Unit Test Code Changes

Our codebase is made up of numerous languages. The C++ portion of our codebase is approximately 200k SLOC according to sloccount: 135k SLOC of product code and 65k SLOC of unit tests. Typically the product code is highly scrutinized for adhering to our coding style guidelines, while rules are a little looser in the unit tests. This resulted in many source changes in the unit tests attributed to one thing (code change #1 below).

A brief aside on the topic of coding style... We use Google's coding style guide with a few small modifications. These have been well considered and help avoid some of the common pitfalls in C++. Where the C++98 standard was lacking, we often use Boost. These things include heavy use of boost::shared_ptr<T>, boost::bind(), and boost::unordered_map<K,V>.

The code changes needed were minimal. I attribute this to our extensive use of Boost. Their libraries handle the difference in compiler capabilities automatically. All of the changes fell into 6 categories:

1. (ERROR) boost::shared_ptr<T>::operator bool() is now explicit
2. (ERROR) std::make_pair<T,U> args must be rvalue references
3. (WARNING) std::auto_ptr<T> is deprecated in favor of std::unique_ptr<T>
4. (ERROR) std::unique_ptr<T> does not support copy assignment
5. (ERROR) nullptr keyword is required when initializing with 0
6. (ERROR) constexpr class variables instead of const

Let's explore each one of these independently.

1. boost::shared_ptr<T>::operator bool() is now explicit

Boost changed the shared_ptr<T>'s boolean operator. Conversion to bool is now explicit in C++11. This means that old valid code now becomes a compilation error, so it is easy to find and fix:

boost::shared_ptr<T> myT;
if (myT) { doSomething(); }  // error!
if (myT != nullptr) { doSomething(); }       // OK
if (myT.get() != nullptr) { doSomething(); } // OK

bool MyClass::isNotNull() const { return myT_; }                   // error!
bool MyClass::isNotNull() const { return myT_ != nullptr; }        // OK
bool MyClass::isNotNull() const { return myT_.get() != nullptr; }  // OK

2. std::make_pair<T,U> args must be rvalue references

In C++98, std::make_pair<T,U> takes references and then copies from them into the new std::pair<T,U>. This function has been removed and rvalue references must be provided. The rvalue refs end up getting moved from, so be careful! Depending on the compiler, your options, and your STL implementation, this one might work:

std::pair<int, int> pairMaker(int v1, int v2) {
    return std::make_pair(v1, v2); // error!
}

The following will work:

std::pair<int, int> pairMaker(int v1, int v2) {
    int a{v1}, b{v2};
    return std::make_pair(a, b); // OK
}

std::pair<int, int> pairMaker(int v1, int v2) {
    return std::make_pair(std::move(v1), std::move(v2)); // OK
}

3. std::auto_ptr<T> is deprecated in favor of std::unique_ptr<T>

There were many evils with std::auto_ptr<T>, but it was the only available smart pointer which could release. In the modern day, std::unique_ptr<T> can also release. A simple global search and replace will do the trick, except for code change #4; see below. This one does not pertain to you if you are not compiling with the -Werror flag enabled. But if you are not, then perhaps you should be; there are plenty of arguments for it.

4. std::unique_ptr<T> does not support copy assignment

In the bad old days of std::auto_ptr<T> there existed an assignment operator to transfer ownership from one auto_ptr to another. This is one of the many evils of std::auto_ptr<T>. I performed a complete search and replace of std::auto_ptr with std::unique_ptr in our codebase because of #3 above. A few places were relying on the assignment behavior, but the compiler immediately flagged it as an error. The fix is easy.

std::auto_ptr<int> p(new int); // deprecated!
std::auto_ptr<int> q = p;      // deprecated!

std::unique_ptr<int> up(new int);
std::unique_ptr<int> uq;
uq = up;                // error!
uq.reset(up.release()); // OK

5. nullptr keyword is required when initializing with 0

The new nullptr keyword has a different type than NULL or 0. Its type is std::nullptr_t. It is preferable because it provides stronger type checking by the compiler. Fortunately, it is not an error since this is used all over our codebase, though in some cases it does produce a warning. Fixing those is simple:

class MyClass {
 public:
  MyClass() : myPtr_{0} {}       // warning!
  MyClass() : myPtr_{nullptr} {} // OK
 private:
  std::unique_ptr<int> myPtr_;
};

6. constexpr class variables instead of const

Variables declared inside a class as const need to be constexpr if they are non-integral types. GCC's in-class initializer of static members extension is now part of the standard, but as constexpr. The compiler will catch these nicely for you.

class MyClass {
  static const double foo = 0.0;     // error!
  static constexpr double foo = 0.0; // OK
};

Other

The only other issue was found in the unit tests. The test counts the number of copies and destructions that are called. It expected 3, but only 2 happened; there was a failure due to the number of copies. MOVE! One of the copies became a move. Getting moves is a reason we are migrating to C++11. It is great to know that this actually happened without any other changes.
Staged Migration Strategy

Now that we have established that the migration is straightforward and works (except for unknown regression bugs), we still need to address the issue of making the code build with or without the conditional flag being passed to the compiler by the build system. This software is for a software-based network appliance and we need to run tests extensively. It would be great to put these changes in, but for now we need to enable this only conditionally. Some of the changes, such as usage of the 'nullptr' keyword, are not available in C++98. Boost provides some macros for just this situation.

Macro Magic and Static Asserts

Macros are not very C++ish, but they are what we have. An extensive list of macros is available from Boost which will use information such as the version and vendor of the compiler and STL to determine what C++11 (and other) features are available. To get around the 'const vs constexpr' problem listed above in #6, use the BOOST_CONSTEXPR_OR_CONST macro. For some of the others you may need to write some of your own. Here is the set that I am using:

#ifdef BOOST_NO_CXX11_NULLPTR
#define LRS_NULLPTR NULL
#else
#define LRS_NULLPTR nullptr
#endif

#ifdef BOOST_NO_CXX11_RVALUE_REFERENCES
#define LRS_MOVE
#else
#define LRS_MOVE std::move
#endif

#ifdef BOOST_NO_CXX11_SMART_PTR
#define LRS_UNIQUE_PTR std::auto_ptr
#else
#define LRS_UNIQUE_PTR std::unique_ptr
#endif

Another thing to consider is adding static asserts. These are executed at compile time and do not change the run-time behavior of the code. It is non-trivial to reason about the copy/move default functions being generated by the compiler in C++11: the developer needs to take into account the copy/move semantics of not only the class being written, but also all of the member variables within the class. Always defining the 5 static asserts below will help. When a code change happens which changes one of them, the compilation will fail and produce an easy-to-read compilation error. This forces the author to evaluate and either change the class or change the static assertion to align with the new behavior. Don't forget to wrap it in Boost's macro!

#ifdef BOOST_HAS_STATIC_ASSERT
static_assert(std::is_default_constructible<MyClass>::value,
              "MyClass is not default constructible.");
static_assert(std::is_copy_constructible<MyClass>::value,
              "MyClass is not copy constructible.");
static_assert(std::is_copy_assignable<MyClass>::value,
              "MyClass is not copy assignable.");
static_assert(std::is_nothrow_move_constructible<MyClass>::value,
              "MyClass is not no-throw move constructible.");
static_assert(std::is_nothrow_move_assignable<MyClass>::value,
              "MyClass is not no-throw move assignable.");
#endif

Summary

The migration to C++11 has been a breeze. The use of Boost and Boost's macros was a large benefit. Minimal code changes were required and the compiler was able to point them out. Regression analysis has yet to be run, but hopefully there are no unexpected results. While we're validating the stability and performance of the C++11 version of our software, we can keep developing new features in C++98. After these changes, switching back and forth between C++98 and C++11 is as easy as a SCons option.
Total SLOC changed of the roughly 200k of C++:

Code Area      Insertions   Deletions
Product        224          142
Tests          234          191
Build System   47           9

Good luck uplifting your codebase. C++14 should be even easier.

Thank you to Jon Kalb (@_JonKalb) for corrections on #1 and #6.

How Microservices are breaking (up) the Network
It's not just apps that are decomposing, it's the network, too. The network is bifurcating. On the one side is rock-solid reliability founded on the premise of permanence and infrequent change. On the other is flexibility borne of transience, immutability and rapid change. There are, by virtue of this change in application architecture from the monolithic to the microservice model, changes necessary in the network; specifically, to the means by which the network delivers the application services upon which apps rely to be themselves delivered in a way that meets the performance, security and availability expectations of business and consumers alike.

Jeff Sussna explains in his post, "Microservices, have you met ...DevOps?":

Numerous commentators have remarked that microservices trade code complexity for operational complexity. Gary Oliffe referred to it as "services with the guts on the outside". It is true that microservices potentially confront operations with an explosion of moving parts and interdependencies. The key to keeping ops from being buried under this new-found complexity is understanding that microservices represent a new organizational model as much as a new architectural model. The organizational transformation needed to make microservices manageable can't be restricted to development; instead, it needs to happen at the level of DevOps. Microservices work by reducing the scope of concern. Developers have to worry about fewer lines of code, fewer features, and fewer interfaces. They can deliver functional value more quickly and often, with less fear of breaking things, and rely on higher-order emergent processes to incorporate their work into a coherent global system.

This new architectural model extends into the network and continues to draw deeper the line between the core, shared network and the application network that began with the explosion of virtualization. This new per-app infrastructure model relies heavily on software and virtual solutions; no hardware here can keep up with the rate of change. Solutions which are not API-enabled and easily orchestrated are to be eschewed in favor of those that are. Immutable or disposable, this per-app service infrastructure must support hundreds or thousands of applications and do so at a cost that won't break the budget (or the bank).

Services which have a greater natural affinity to the application - load balancing, application security, performance - are migrating closer to the app not only in topology but in form factor, with virtual machines and software preferred. This is partially due to the affinity that model shares with cloud, and begins to paint a better picture of what a truly hybrid architecture inclusive of cloud may look like in the future. Virtual and software-based per-app service infrastructure fits more naturally in a cloud than do its hardware-tethered counterparts, and provides the ability to more specifically tune and tweak services to the application to ensure the highest levels of availability, security and performance.

This is, no doubt, why we see such a significant emphasis on programmability of application services as it relates to those who view DevOps as strategic. Every manual change required to a service configuration is pure overhead that can be eliminated; an obstacle removed in the path to speedier deployment.
The ability to programmatically modify such services inclusive of treating that infrastructure as code through templates and configuration artifact versioning is imperative to maintaining stability of the entire infrastructure in the face of frequent change. DevOps responsibility will no doubt expand to include the infrastructure delivering per-app services. A DZone report backs this up, showing increasing expansion and desire to expand beyond the app. What this means is more careful attention paid to that infrastructure, especially in terms of how supportive it is of integration with other toolsets, capabilities to fit within a model that treats infrastructure as code (templates and APIs), as well as its licensing model, which must support an economy of scale that matches well with cloud and similarly hyper-scaled environments, such as those implementing microservice-based architectures.

Configuring the KVM hypervisor to run a LineRate guest
The release of LineRate 2.5.0 brings lots of new features, but the one I'm most excited about is KVM guest support. We've spent a lot of effort making LineRate run well as a guest under the KVM hypervisor "out of the box". We've also identified a couple of configuration tweaks you can do on the hypervisor host that improve performance and efficiency. These tweaks are:

- Enable multiple queues on the Virtio network interface
- Pin the LineRate guest to vCPUs that are not used by the hypervisor host

Enable Virtio NIC multiqueue

LineRate supports multiple send and receive queues on Virtio network interfaces. This feature allows multiple vCPUs to be sending and receiving traffic simultaneously, improving network throughput. Your hypervisor host needs to run KVM 2.0.0 or later and libvirt 1.2.2 or later to use the multiqueue feature of Virtio NICs. The optimal number of queues depends on the number of vCPUs in the LineRate guest, as shown below:

Guest vCPUs   Queues
1             1
2             1
4             1
6             2
8             2
12            4
16            6
24            8
32            19

Note: The default number of queues for a KVM guest is 1. If the table above indicates that the optimal number of queues for your LineRate guest is 1, then there is no need to do anything.

To enable Virtio multiqueue support:

1. After creating the LineRate guest with Virtio NICs, shut down the guest.
2. Manually edit the guest XML, for example using the virsh edit command.
3. In every <interface> section, add the following element, using the table above to determine the queues value: <driver name='vhost' queues='8'/>
4. Save the file.
5. Restart the guest.

Pin guest vCPUs to host vCPUs

Virtual machines share vCPUs with the hypervisor host. In many situations, you can improve LineRate performance by coordinating which vCPUs are used by the host and the guest. You want to pin guest vCPUs to hypervisor vCPUs that are not used by the host's network drivers.

To implement vCPU pinning:

1. Run some traffic through the LineRate guest and determine which physical NIC on the hypervisor is carrying the LineRate guest's traffic.
2. Look in the following files on the hypervisor host to determine which vCPUs are being used for the LineRate guest's network traffic:
   /proc/interrupts: Shows which interrupts are serviced by which host vCPUs. Look for the interrupts coming from the physical NIC(s) identified in step 1 above and identify the hypervisor host vCPUs that handle those interrupts.
   /sys/class/net/$DEV/device/local_cpulist (where $DEV is the name of the physical NIC identified in step 1 above): Shows which host vCPUs are connected to the physical NIC.
3. Use the virsh capabilities command to see all of the host vCPUs. Identify vCPUs which are not used by the physical NIC; these are the proper vCPUs to pin the LineRate guest to.
4. Use virsh vcpupin (or virt-manager or virsh edit) to pin guest vCPUs to unused host vCPUs.

If you choose to manually edit the XML file using virsh edit, it should look something like what's shown below. In the libvirt XML file, vcpu specifies the guest vCPU and cpuset specifies the hypervisor host vCPU.
<vcpu placement='static'>16</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='8'/>
  <vcpupin vcpu='1' cpuset='9'/>
  <vcpupin vcpu='2' cpuset='10'/>
  <vcpupin vcpu='3' cpuset='11'/>
  <vcpupin vcpu='4' cpuset='12'/>
  <vcpupin vcpu='5' cpuset='13'/>
  <vcpupin vcpu='6' cpuset='14'/>
  <vcpupin vcpu='7' cpuset='15'/>
  <vcpupin vcpu='8' cpuset='24'/>
  <vcpupin vcpu='9' cpuset='25'/>
  <vcpupin vcpu='10' cpuset='26'/>
  <vcpupin vcpu='11' cpuset='27'/>
  <vcpupin vcpu='12' cpuset='28'/>
  <vcpupin vcpu='13' cpuset='29'/>
  <vcpupin vcpu='14' cpuset='30'/>
  <vcpupin vcpu='15' cpuset='31'/>
</cputune>

Example vCPU pinning configuration for a LineRate guest in KVM
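The same pinning can also be applied from the command line with virsh vcpupin; for example (the guest name is a placeholder, and the vCPU/host-CPU numbers match the first two rows of the XML above):

virsh vcpupin linerate-guest 0 8 --config
virsh vcpupin linerate-guest 1 9 --config

The --config option stores the pinning in the persistent guest definition so it survives a guest restart.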