metrics
11 Topics

sysHttpStatRespBucket1k SNMP metrics meaning
Hi, I would like to get information about a few of the exposed SNMP metrics whose descriptions are very unclear: sysHttpStatRespBucket1k with OID 1.3.6.1.4.1.3375.2.1.1.2.4.17 and sysHttpStatRespBucket4k with OID 1.3.6.1.4.1.3375.2.1.1.2.4.18. If we take sysHttpStatRespBucket1k, I found the following description, "The number of responses under 1k," but are we talking about an HTTP response size or a duration? Thanks for any light anyone can provide on this topic.
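One way to settle the question empirically is to watch the counters move while generating responses of a known byte size. A minimal sketch using net-snmp's snmpget from Python, assuming the net-snmp CLI tools are installed, SNMP v2c with the community string "public" is permitted, and the hostname is a placeholder; note the .0 instance suffix needed when querying these scalar OIDs:

```python
# Poll the two response-bucket counters twice and print the delta.
# Host and community string are assumptions -- adjust for your device.
import subprocess
import time

HOST = "bigip.example.com"  # hypothetical hostname
COMMUNITY = "public"        # assumed community string

OIDS = {  # OIDs from the question, with the .0 scalar instance suffix
    "sysHttpStatRespBucket1k": "1.3.6.1.4.1.3375.2.1.1.2.4.17.0",
    "sysHttpStatRespBucket4k": "1.3.6.1.4.1.3375.2.1.1.2.4.18.0",
}

def snmp_get(oid: str) -> int:
    """Fetch one value with snmpget; -Oqv prints just the value."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, oid],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

before = {name: snmp_get(oid) for name, oid in OIDS.items()}
time.sleep(60)  # the counters are cumulative, so sample twice and diff
after = {name: snmp_get(oid) for name, oid in OIDS.items()}

for name in OIDS:
    print(f"{name}: +{after[name] - before[name]} in 60s")
```

Sending a handful of requests with responses of a known size while this runs would show which bucket increments, and therefore whether the buckets count by size or by duration.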
OIDs for virtual servers stats

hi everybody, I want to get some stats via SNMP, like the stats displayed in the LTM BIG-IP GUI (Statistics ›› Module Statistics : Local Traffic ›› Statistics Type = virtual servers). What are the OIDs for each of these virtual server stats: name, IP address, bits (in/out), packets (in/out), and current connections? thanks a lot
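While waiting for the authoritative list, one way to explore is to walk the virtual-server statistics table by object name. A sketch under assumptions: net-snmp is installed, the F5-BIGIP-LOCAL-MIB file has been copied into your MIB path, and the column names below are recalled from that MIB rather than verified against any particular version, so confirm them with snmptranslate first. The MIB reports bytes, so multiply by 8 if you need bits:

```python
# Walk commonly used per-virtual-server stat columns by MIB name.
# Column names are from memory of F5-BIGIP-LOCAL-MIB -- verify them
# against the MIB shipped with your BIG-IP version before relying on
# this. Host and community string are placeholders.
import subprocess

HOST = "bigip.example.com"
COMMUNITY = "public"

COLUMNS = [
    "ltmVirtualServStatName",            # virtual server name
    "ltmVirtualServStatClientBytesIn",   # bytes in  (x8 for bits)
    "ltmVirtualServStatClientBytesOut",  # bytes out (x8 for bits)
    "ltmVirtualServStatClientPktsIn",    # packets in
    "ltmVirtualServStatClientPktsOut",   # packets out
    "ltmVirtualServStatClientCurConns",  # current connections
]

for col in COLUMNS:
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY,
         "-m", "F5-BIGIP-LOCAL-MIB", HOST, col],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout, end="")
```

The virtual server's address lives in the configuration table (ltmVirtualServTable) rather than the stat table, so it would need a separate walk.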
F5 Monitoring

Hi, I put together several technologies and prepared a full-fledged monitoring system for the F5 device overall and the LTM module. It can be extended to other modules such as ASM, GTM, etc. There is a link to a presentation covering all the details; please check it out and contact me if you are interested. Here: Presentation. Use cases covered:

- Compare TMM CPU cycles against virtual servers and iRules.
- Correlate interface/VLAN PPS and bandwidth values with virtual servers, showing what drives top usage.
- Show HTTP compression values as active bandwidth, correlated with compression.
- Check iRule CPU cycles to identify which iRule uses the most, and the effect after changing an iRule to operate differently.
- Track savings/usage from iRules.

Note: many more use cases like these can be added based upon need. Thanks
GTM: avoiding flapping DNS answers with RTT method

I need to understand how GTM metrics work for GTM LDNS probes.

1) How can the decisions be logged? I am using 11.2 but moving fast to 11.4.1 :)

2) Let's take an example. If our GTM chooses a VIP in the USA 100 consecutive times because the RTT to this USA VIP is lower, and then one time it gets a better value - for whatever reason - for another VIP, for example in Australia, will that last value, which differs from the 100 previous ones, be considered valid? Is there a cache variation value that can be configured to avoid these flapping choices? (We had this option in Cisco GSS.)

3) How long is the non-optimal Australian value kept in cache until a new value is reconsidered? Is it the Inactive timeout of 28 days?
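On question 2, what the poster describes from Cisco GSS is essentially hysteresis: don't switch answers unless the challenger beats the incumbent by a margin. Purely to illustrate the idea, a toy sketch follows; it is not BIG-IP GTM code and makes no claim about what 11.x exposes:

```python
# Toy dampening ("cache variation") logic: a single better-looking RTT
# sample only wins if it beats the incumbent by a clear margin.
# Not GTM code -- just an illustration of the concept in question 2.

def pick(incumbent: str, rtt_ms: dict[str, float], margin: float = 0.2) -> str:
    """Keep the incumbent unless another VIP's RTT is at least
    `margin` (20% by default) lower than the incumbent's."""
    best = min(rtt_ms, key=lambda vip: rtt_ms[vip])
    if best != incumbent and rtt_ms[best] < rtt_ms[incumbent] * (1 - margin):
        return best       # clearly better: switch answers
    return incumbent      # within the noise band: stick

# A single slightly-better Australian sample does not flip the answer:
print(pick("usa-vip", {"usa-vip": 40.0, "aus-vip": 38.0}))  # -> usa-vip
# A decisively better one does:
print(pick("usa-vip", {"usa-vip": 40.0, "aus-vip": 25.0}))  # -> aus-vip
```

Whether GTM offers an equivalent knob, and how long a path metric stays cached (question 3), is version-specific and best confirmed against the GTM documentation for 11.x.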
Architecting Scalable Infrastructures: CPS versus DPS

#webperf As we continue to find new ways to make connections more efficient, capacity planning must look to other metrics to ensure scalability without compromising performance.

Infrastructure metrics have always been focused on speeds and feeds: throughput, packets per second, connections per second, etc. These metrics have been used to evaluate and compare network infrastructure for years, ultimately being used as a critical component in data center design. This makes sense. After all, it's not rocket science to figure out that a firewall capable of handling 10,000 connections per second (CPS) will overwhelm a next-hop device (load balancer, A/V scanner, etc.) only capable of 5,000 CPS. Or will it? The problem with old skool performance metrics is that they focus on ingress, not egress, capacity. With SDN pushing a new focus on both northbound and southbound capabilities, it makes sense to revisit the metrics upon which we evaluate infrastructure and design data centers.

CONNECTIONS versus DECISIONS

As we've progressed from focusing on packets to sessions, from IP addresses to users, from servers to applications, we've necessarily seen an evolution in the intelligence of network components. It's not just application delivery that's gotten smarter; it's everything. Security, access control, bandwidth management, even routing (think NAC) has become much more intelligent. But that intelligence comes at a price: processing. That processing turns into latency as each device takes a certain amount of time to inspect, evaluate and ultimately decide what to do with the data. And therein lies the key to our conundrum: it makes a decision. That decision might be routing-based or security-based or even logging-based. What the decision is, is not as important as the fact that it must be made. SDN necessarily brings this key differentiator between legacy and next-generation infrastructure to the fore, as it's not just software-defined but software-deciding networking. When a switch doesn't know what to do with a packet in SDN, it asks the controller, which evaluates and makes a decision. The capacity of SDN – and of any modern infrastructure – is at least partially determined by how fast it can make decisions. Examples of decisions:

- URI-based routing (load balancers, application delivery controllers)
- Virus scanning
- SPAM scanning
- Traffic anomaly scanning (IPS/IDS)
- SQLi / XSS inspection (web application firewalls)
- SYN flood protection (firewalls)
- BYOD policy enforcement (access control systems)
- Content scrubbing (web application firewalls)

The DPS capacity of a system is not the same as its connection capacity, which is merely the measure of how many new connections per second can be established (and in many cases how many connections can be simultaneously sustained). Such a measure merely determines how optimized the networking stack of any given solution might be, as connections – whether TCP or UDP or SMTP – are protocol-oriented, and it is the networking stack that determines how well connections are managed. The CPS rate of any given device tells us nothing about how well it will actually perform its appointed tasks. That's what the decisions per second (DPS) metric tells us.

CONSIDERING BOTH CPS and DPS

The reality is that most systems will have a higher CPS than DPS. That's not necessarily bad, as evaluating data as it flows through a device requires processing, and processing necessarily takes time.
Using both CPS and DPS merely recognizes this truth and forces it to the fore, where it can be used to better design the network. A combined metric helps design the network by offering insight into the real capacity of a given device, rather than a marketing capacity. When we look only at CPS, for example, we might feel perfectly comfortable with a topological design with a flow of similar CPS capacities. But what we really want is to make sure that DPS -> CPS (and vice versa) capabilities are matched up correctly, lest we introduce more latency than is necessary into a given flow. What we don't want is to end up with a device with a high DPS rate feeding into a device with a lower CPS rate. We also don't want to design a flow in which DPS rates successively decline. Doing so means we're adding more and more latency into the equation.

The DPS rate is a much better indicator of capacity than CPS for designing high-performance networks because it is a realistic measure of performance, and yet a high DPS coupled with a low CPS would be disastrous. Luckily, a mismatch between CPS and DPS almost always favors CPS, with DPS being the lower of the two metrics. What we want to see is as close a CPS:DPS ratio as possible. The ideal is 1:1, of course, but given the nature of inspecting data it is unrealistic to expect such a tight ratio. Still, if the ratio becomes too high, it indicates a potential bottleneck in the network that must be addressed. For example, assume an extreme case of a CPS:DPS ratio of 2:1. The device can establish 10,000 CPS but only process at a rate of 5,000 DPS, leading to increasing latency or other undesirable performance issues as connections queue up waiting to be processed. Obviously there's more at play than just new CPS and DPS (concurrent connection capability is also a factor), but the relationship between new CPS and DPS is a good general indicator of potential issues.

Knowing the DPS of a device enables architects to properly scale out the infrastructure to remediate potential bottlenecks. This is particularly true when TCP multiplexing is in play, because it necessarily reduces CPS to the target systems but in no way impacts the DPS. On the ingress, too, emerging protocols like SPDY make more efficient use of TCP connections, making CPS an unreliable measure of capacity, especially if DPS is significantly lower than the CPS rating of the system. Relying upon CPS alone, particularly when using TCP connection management technologies, as a means to achieve scalability can negatively impact performance. Testing systems to understand their DPS rate is paramount to designing a scalable infrastructure with consistent performance.

Related posts:
The Need for (HTML5) Speed
SPDY versus HTML5 WebSockets
Y U No Support SPDY Yet?
Curing the Cloud Performance Arrhythmia
F5 Friday: Performance, Throughput and DPS
Data Center Feng Shui: Architecting for Predictable Performance
On Cloud, Integration and Performance
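To make the 2:1 example above concrete, here is a back-of-the-envelope model of how a backlog grows when connections arrive faster than decisions can be made. The numbers are the article's; the linear, no-shedding queue model is a simplifying assumption, as a real device would refuse or shed load well before the queue grew unbounded:

```python
# Toy model of the article's 2:1 CPS:DPS example. Assumes a constant
# arrival rate and no load shedding -- a simplification, not a claim
# about how any particular device behaves.
cps = 10_000  # new connections accepted per second
dps = 5_000   # decisions the device can make per second

backlog = 0
for second in range(1, 6):
    backlog += cps - dps      # queue grows by the per-second shortfall
    wait = backlog / dps      # time a new arrival waits for a decision
    print(f"t={second}s  backlog={backlog:,}  queue wait ~ {wait:.1f}s")
```

After five seconds the device is 25,000 connections behind and a new arrival waits roughly five seconds for its decision, which is exactly the latency creep the article warns about.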
WILS: SSL TPS versus HTTP TPS over SSL

The difference between these two performance metrics is significant, so be sure you know which one you're measuring, and which one you wanted to be measuring.

It may be the case that you've decided that SSL is, in fact, a good idea for securing data in transit. Excellent. Now you're trying to figure out how to implement support, and you're testing solutions or perhaps trying to peruse reports someone else generated from testing. Excellent. I'm a huge testing fan, and it really is one of the best ways to size a solution specifically for your environment. Some of the terminology used to describe specific performance metrics in application delivery, however, can be misleading. The difference between SSL TPS (transactions per second) and HTTP TPS over SSL, for example, is significant, and the terms therefore should not be used interchangeably when comparing the performance and capacity of any solution – and that goes for software, hardware, or some yet-to-be-defined combination thereof.

The reason interpreting claims of SSL TPS is so difficult is the ambiguity that comes from SSL itself. An SSL "transaction" is, by general industry agreement (unenforceable, of course), a single transaction that is "wrapped" in an SSL session. Generally speaking, one SSL transaction is considered to be:

1. Session establishment (authentication, key exchange)
2. Exchange of data over SSL, often a 1KB file over HTTP
3. Session closure

Seems logical, but technically speaking a single SSL transaction could be interpreted as any single transaction conducted over an SSL-encrypted session, because the very act of transmitting data over the SSL session necessarily requires SSL-related operations. SSL session establishment requires a handshake and an exchange of keys, and the transfer of data within such a session requires the invocation of encryption and decryption operations (often referred to as bulk encryption). Therefore it is technically accurate for SSL capacity/performance metrics to use the term "SSL TPS" and be referring to two completely different things. This means it is important that whoever is interested in such data do a little research to determine exactly what is meant by SSL TPS when presented with it. Depending on the definition, the actual results mean different things. When used to refer to HTTP TPS over SSL, the constraint is actually on the bulk encryption rate (related more to response time, latency, and throughput measurements), while SSL TPS measures the number of SSL sessions that can be created per second and is more related to capacity than to response-time metrics.

It can be difficult to determine which method was utilized, but if you see the term "SSL ID re-use" anywhere, you can be relatively certain the test results refer to HTTP TPS over SSL rather than SSL TPS. When SSL session IDs are reused, the handshaking and key exchange steps are skipped, which reduces the number of computationally expensive RSA operations that must be performed and artificially increases the results. As always, if you aren't sure what a performance metric really means, ask. If you don't get a straight answer, ask again, or take advantage of all that great social networking you're doing and find someone you trust to help you determine what was really tested. Basing architectural decisions on misleading or misunderstood data can cause grief and be expensive later when you have to purchase additional licenses or solutions to bring your capacity up to what was originally expected.
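One practical way to see the difference described above is `openssl s_time`, which can benchmark a target with full handshakes versus resumed sessions. A minimal sketch, assuming the openssl CLI is installed and using a hypothetical target host: `-new` forces a full handshake per connection, approximating SSL TPS, while `-reuse` resumes the session ID, approximating the HTTP-TPS-over-SSL case:

```python
# Compare full-handshake vs. session-reuse connection rates with
# `openssl s_time` (assumed installed). The host is a placeholder.
import subprocess

HOST = "www.example.com:443"  # hypothetical target

for mode in ("-new", "-reuse"):
    print(f"--- openssl s_time {mode} ---")
    subprocess.run(
        ["openssl", "s_time", "-connect", HOST, mode, "-time", "10"],
        check=True,  # raise if openssl exits non-zero
    )
```

If the `-reuse` run reports a much higher connection rate than `-new`, you are looking at the gap between bulk-encryption capacity and session-establishment capacity that makes the two metrics non-interchangeable.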
WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly dallying around.

Related posts:
The Anatomy of an SSL Handshake
When Did Specialized Hardware Become a Dirty Word?
WILS: Virtual Server versus Virtual IP Address
Following Google’s Lead on Security? Don’t Forget to Encrypt Cookies
WILS: What Does It Mean to Align IT with the Business
WILS: Three Ways To Better Utilize Resources In Any Data Center
WILS: Why Does Load Balancing Improve Application Performance?
WILS: Application Acceleration versus Optimization
All WILS Topics on DevCentral
What is server offload and why do I need it?
F5 Friday: Performance, Throughput and DPS

No, not World of Warcraft “Damage per Second” - infrastructure “Decisions per Second”.

Metrics are tricky. Period. Comparing metrics is even trickier. The purpose of performance metrics is, of course, to measure performance. But like most tests, before you can administer such a test you really need to know what it is you’re testing. Saying “performance” isn’t enough and never has been, as the term has a wide variety of meanings that are highly dependent on a number of factors.

The problem with measuring infrastructure performance today – and this will continue to be a major obstacle in metrics-based comparisons of cloud computing infrastructure services – is that we’re still relying on fairly simple measurements as a means to determine performance. We still focus on speeds and feeds, on wires and protocol processing. We look at throughput, packets per second (PPS) and connections per second (CPS) for network and transport layer protocols. While these are generally accurate for what they’re measuring, we start running into real problems when we evaluate the performance of any component – infrastructure or application – in which processing, i.e. decision making, must occur.

Consider the difference in performance metrics between a simple HTTP request/response in which the request is nothing more than a GET request paired with a 0-byte payload response, and an HTTP POST request filled with data that requires processing not only on the application server but also on the database, plus the serialization of a JSON response. The metrics that describe the performance of these two requests will almost certainly show that the former has a higher capacity and faster response time than the latter. Obviously those who wish to portray a high-performance solution are going to leverage the former test, knowing full well that those metrics are “best case” and will almost never be seen in a real environment, because a real environment must perform processing, as per the latter test.

Suggestions that a standardized testing environment be used, similar to application performance comparisons using the Pet Shop Application, are generally met with a frown, because using a standardized application to induce real processing delays doesn’t actually test the infrastructure component’s processing capabilities; it merely adds latency on the back end and stresses the capacity of the infrastructure component. Too, such a yardstick would fail to really test what’s important – the speed and capacity of an infrastructure component to perform processing itself, to make decisions and apply them on the component – whether it be security or application routing or transformational in nature. It’s an accepted fact that processing of any kind, at any point along the application delivery service chain, induces latency, which impacts capacity. Performance numbers used in comparisons should reveal the capacity of a system including that processing impact. Complicating the matter is the fact that, since there are no accepted standards for performance measurement, different vendors can use the same term to discuss metrics measured in totally different ways.

THROUGHPUT versus PERFORMANCE

Infrastructure components, especially those that operate at the higher layers of the networking stack, make decisions all the time. A firewall service may make a fairly simple decision: is this request for this port on this IP address allowed or denied at this time?
An identity and access management solution must make similar decisions, taking into account other factors, answering the question: is this user, coming from this location on this device, allowed to access this resource at this time? Application delivery controllers, a.k.a. load balancers, must also make decisions: which instance has the appropriate resources to respond to this user and this particular request within specified performance parameters at this time?

We’re not just passing packets anymore, and therefore performance tests that measure only the surface ability to pass packets or open and close connections are simply not enough. Infrastructure today is making decisions, and because those decisions often require intercepting, inspecting and processing application data – not just individual packets – it becomes more important to compare solutions from the perspective of decisions per second rather than surface-layer protocols per second. Decision-based performance metrics are a more accurate gauge of how a solution will perform in a “real” environment, to be sure, as they portray the component’s ability to do what it was intended to do: make decisions and perform processing on data. Layer 4 or HTTP throughput metrics seldom come close to representing the performance impact that normal processing will have on a system and, while important, should be used only with caution when considering performance.

Consider the metrics presented by Zeus Technologies in a recent performance test (Zeus Traffic Manager - VMware vSphere 4 Performance on Cisco UCS, 2010) and F5’s performance results from 2010 (F5 2010 Performance Report). While both show impressive throughput, they also show the performance impact that occurs when additional processing – decisions – is added into the mix. The ability of any infrastructure component to pass packets or manage connections (TCP capacity) is all well and good, but these metrics are always negatively impacted once the component begins actually doing something, i.e. making decisions. Being able to handle almost 20 Gbps of throughput is great, but if that measurement wasn’t taken while decisions were being made at the same time, your mileage is not just likely to vary – it will vary wildly.

Throughput is important, don’t get me wrong. It’s part of – or should be part of – the equation used to determine what solution will best fit the business and operational needs of the organization. But it’s only part of the equation, and probably a minor part of that decision at that. Decision-based metrics should also be one of the primary means of evaluating the performance of an infrastructure component today. “High performance” cannot be measured effectively based on merely passing packets or making connections – high performance means being able to push packets, manage connections and make decisions, all at the same time. This is increasingly a fact of data center life as infrastructure components continue to become more “intelligent”, as they become first-class citizens in the enterprise infrastructure architecture and are more integrated with and relied upon to assist in providing the services required to support today’s highly motile data center models.

Evaluating a simple load balancing service based on its ability to move HTTP packets from one interface to the other with no inspection or processing is nice, but if you’re ultimately planning on using it to support persistence-based routing, a.k.a.
sticky sessions, then the rate at which the service executes the decisions necessary to support that service should be as important – if not more so – to your decision-making process.

DECISIONS per SECOND

There are very few pieces of infrastructure on which decisions are not made on a daily basis. Even the use of VLANs requires inspection and decision-making to occur on the simplest of switches. Identity and access management solutions must evaluate a broad spectrum of data in order to make a simple “deny” or “allow” decision, and application delivery services make a variety of decisions across the security, acceleration and optimization demesne for every request they process. And because every solution is architected differently and composed of different components internally, the speed and accuracy with which such decisions are made are variable and will certainly impact the ability of an architecture to meet or exceed business and operational service-level expectations. If you’re not testing that aspect of the delivery chain before you make a decision, you’re likely to be either pleasantly surprised or hopelessly disappointed in the decision-making performance of those solutions.

It’s time to start talking about decisions per second and the performance of infrastructure in the context in which it’s actually used in data center architectures, rather than as stand-alone, packet-processing, connection-oriented devices. And as we do, we need to remember that every network is different, carrying different amounts of traffic from different applications. That means any published performance numbers are simply guidelines and will not accurately represent the performance experienced in an actual implementation. However, published numbers can be valuable tools in comparing products – as long as they are based on the same or very similar testing methodologies. Before using any numbers from any vendor, understand how those numbers were generated, what they really mean, and how much additional processing they include (if any). When looking at published performance measurements for a device that will be making decisions and processing traffic, make sure you are using metrics based on performing that processing.

Related posts:
1024 Words: Ch-ch-chain of Fools
On Cloud, Integration and Performance
As Client-Server Style Applications Resurface Performance Metrics Must Include the API
F5 Friday: Speeds, Feeds and Boats
Data Center Feng Shui: Architecting for Predictable Performance
Operational Risk Comprises More Than Just Security
Challenging the Firewall Data Center Dogma
Dispelling the New SSL Myth
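To make “decisions per second” tangible, here is a micro-benchmark sketch comparing a pure pass-through touch of a request with a pass that actually inspects it. The regex is a deliberately naive stand-in for processing, nothing like a real WAF rule, and the absolute numbers are meaningless; the point is the ratio between the two rates:

```python
# Sketch: rate of "just pass it" vs. "inspect it and decide".
# The SQLi pattern is intentionally naive -- it only stands in for
# the cost of making a decision, not for real inspection logic.
import re
import time

REQUEST = b"GET /search?q=books&session=abc123 HTTP/1.1\r\n"
SQLI = re.compile(rb"union\s+select|or\s+1=1", re.IGNORECASE)

def rate(fn, n: int = 200_000) -> float:
    """Calls per second achieved by fn over n iterations."""
    start = time.perf_counter()
    for _ in range(n):
        fn(REQUEST)
    return n / (time.perf_counter() - start)

passthrough = rate(len)                     # no decision, just touch it
deciding = rate(lambda r: SQLI.search(r))   # inspect and decide

print(f"pass-through:  {passthrough:,.0f}/s")
print(f"with decision: {deciding:,.0f}/s "
      f"(ratio {passthrough / deciding:.1f}:1)")
```

Even this trivial “decision” cuts the achievable rate noticeably; stack up several real inspections and the gap between packet-pushing benchmarks and decision-making reality widens accordingly.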
F5 Friday: Anti-Fail

I recently expounded on my disappointment with cloud computing services that fail to recognize that server metrics are not necessarily enough to properly auto-scale applications in “I Find Your Lack of Win Disturbing”. One of the (very few) frustrating things about working for F5 is that we’re doing so much in so many different areas of application delivery that sometimes I’m not aware we already have a solution to a broader problem until I say “I wish …” (I guess in a way that’s kind of cool in and of itself, right?) Such is apparently the case with auto-scaling and application metrics. I know we integrate with IIS and Apache and Oracle and a host of other web and application servers to collect very detailed and specific application metrics, but what I didn’t know was how well integrated these have become with our management solution.

Shortly after posting I got an e-mail from Joel Hendrickson, one of our senior software engineers, who pointed out that “all of the ingredients in ‘Grandma’s Auto-Scaling Recipe’ and much more are available when using the F5MP [F5 Management Pack].” Joel says, “I think you’re essentially saying that hardware-derived metrics are too simplistic for decisions such as scale-out, and that integrating/aggregating data from the various ‘authoritative sources’ in an application is key to making informed decisions.” Yes, that’s exactly what I was saying, only not quite so well. Joel went on to direct my attention to one of his recent blog posts on the subject, detailing how the F5MP does exactly that. Given that Joel already did such an excellent job of explaining the solution and what it can do, I’ve summarized the main metrics available here but will let you peruse his blog entry for the meaty details (including some very nice network diagrams) and links to download the extension (it’s free!), video tutorials, and the F5 Management Pack Application Designer Wiki Documentation.
Development Performance Metrics Will Eventually Favor Cost per Line of Code

It is true right now that, for the most part, virtualization changes the deployment of applications but not their development. Thus far this remains the case, primarily because those with an interest in moving organizations to public cloud computing have reason to make the move “easy” and painless, which means no changes to applications. But eventually changes will be required, if not by cloud providers then by the organization that pays the bills.

One of the most often cited truisms of development is actually more of a lament on the part of systems administrators. The basic premise is that while Moore’s Law holds true, it really doesn’t matter, because developers’ software will simply use all available CPU cycles and every bit and byte of memory. Basically, the belief is that developers don’t care about writing efficient code because they don’t have to – they have all the memory and CPU in the world to execute their applications. Virtualization hasn’t changed that at all, as instances are simply sized for what the application needs (which is a lot, generally). It doesn’t work the other way around. Yet. But it will, eventually, as customers demand – and receive – a true pay-per-use cloud computing model. The premise of pay-for-what-you-use is a sound one, and it is indeed a compelling reason to move to public cloud computing. Remember that according to IDC analysts at Directions 2010, the primary driver for adopting cloud computing is all about “pay per use”, with “monthly payments” also in the top four reasons to adopt cloud. Luckily for developers, cloud computing providers for the most part do not bill “per use”; they bill “per virtual machine instance.”
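To see why that billing distinction matters, consider a toy cost comparison under entirely hypothetical prices and workload; every number below is invented for illustration:

```python
# Hypothetical comparison of per-instance vs. true per-use billing.
# All prices and the request volume are made up for illustration.
HOURS_IN_MONTH = 730
INSTANCE_RATE = 0.10        # $/hour for a running instance (assumed)
REQUEST_RATE = 0.0000005    # $/request under pay-per-use (assumed)
REQUESTS = 50_000_000       # requests actually served this month

per_instance = INSTANCE_RATE * HOURS_IN_MONTH  # billed busy or idle
per_use = REQUEST_RATE * REQUESTS              # billed only for work done

print(f"per-instance: ${per_instance:.2f}/month")  # $73.00
print(f"per-use:      ${per_use:.2f}/month")       # $25.00
```

Under the per-instance model, inefficient code hides inside a flat monthly fee; under true per-use billing, every wasted cycle shows up on the invoice, which is exactly the pressure that would eventually push development metrics toward cost per line of code.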
Corporate Blogging: The Fallacy of Quantity vs Quality

As a corporate blogger I rarely post "off topic". There's a reason for that, and a reason why I'm doing so now. The core reason for doing so now is that it's a subject that's near and dear to me, having spent the majority of the past eight years writing and blogging in publishing and on the corporate side of the table, and I see far too many posts out there offering advice about blogging that's focused solely on "getting more hits". While that might be sound advice for personal blogs, it's off-key when it comes to corporate efforts. There is a belief, and it's wrong, that more is better when it comes to corporate blogging - whether it's more posts or more hits. In fact, the opposite is true: quality - whether of readers or of posts - is more important than quantity.

To understand the fallacy of quantity vs quality you first have to understand the history of trade publishing, and why it has suffered so much financial pain. Don touched on this briefly, having also spent a lot of time in the publishing industry (we like to work together, thank you; I know it's weird, but that's the way we are), but I'm going to expand further on the topic. Back in the old days (print), trade publications and, if we're honest, newspapers were all based on one of three revenue models: advertising, subscriptions, or a hybrid of both. Magazines that subsisted on advertising alone managed to do so by qualifying their circulation base, thus assuring advertisers that they were paying those high rates because the reader base was primarily their target market. When the Web exploded, everyone demanded "free" content, including from trade publications and newspapers. The publishing industry was a bit confused and wasn't certain how to respond to the move to the web, because the revenue model wasn't the same. An anonymous page view of an article is hardly equivalent to a well-qualified reader, and thus advertising revenue on the web was seriously impacted. Advertisers were no longer willing to pay the same rate for "views" because they couldn't be certain of the value of those page views; they couldn't qualify them as being part of their target market. Advertising rates plummeted, and trade publications - and newspapers - began to drop faster than the waistlines of girls' jeans over the past few years.

The publishing industry as a whole floundered for a time, until it started to implement more gated content. Gated content requires you to provide certain pieces of information during the registration process before you're allowed to see the content. Some of that information is, not coincidentally, similar to that traditionally found on a qual card - the card you filled out to see if you qualified for a "free" subscription to a trade publication. This model breathed new life into publishing, as advertisers are much more willing to sponsor micro-sites or pay higher rates for advertisements on specific types of gated content, because they are more confident about the quality of the page views.

Corporate blogging is becoming nearly a mandate for many organizations. Its value in promoting brand awareness, thought leadership, and market education cannot - and should not - be underestimated. But it is easy to fall into the trap of correlating quantity of hits with success; e.g., a thousand hits on a blog post is better than a hundred, and posting every day is better than two or three times a week. Quantity is often considered more important than quality.
As the publishing industry has come to understand - and as corporations should already know, because they drove the industry to understand it - the quantity of page views is less relevant than the quality of the reader, and a few good posts are better than many mediocre or irrelevant ones. It's actually fairly easy to write a post that will make the front page of Digg, or make it onto Slashdot, and generate a ton of hits. Unfortunately for most corporate bloggers, the kinds of posts that generate that kind of traffic and interest are rarely related to their industry and thus do not forward the corporate blogging goals of brand awareness, education, or thought leadership, which, in most cases, should be relevant to the industry in which the corporation operates. A post extolling the benefits of a CRM or an application delivery controller or a BI suite is just unlikely to engender that kind of attention.

Relevant, engaging content that educates and forwards corporate goals should be the aim of corporate blogging efforts. Hit counts, while certainly nice, have been proven by the trials and tribulations of the publishing industry to be an unreliable measure of success, and they do little for the corporation unless it's well understood where the hits are coming from. Yes, writing relevant content often results in a lower hit count - one of the challenges discussed by Jeremiah Owyang in "The Many Challenges of Corporate Blogging". I write primarily on the subject of application delivery - from security to optimization to acceleration. It isn't, for the most part, controversial, nor is it as exciting as politics, so its reach and audience are much smaller than, say, something of interest to the masses. But I've learned from long experience in publishing that hits from the masses aren't likely to help "forward the cause". A page view from Sally in finance is unlikely to ever really be of value, because she isn't involved in IT, would likely not understand the relevance of application delivery to the applications she uses at work, and isn't likely to discuss high availability or load balancing with the folks in IT or even be able to suggest or influence such an option - she probably doesn't even know IT is looking into it. The page view from Sally is virtually worthless in terms of achieving corporate goals. The problem is that it's impossible to know whether a page view came from Sally or from the CIO or IT manager responsible for architecting an application delivery network.

Targeted, relevant content does a much better job of qualifying readership than general, unrelated topics. Readers of a post on cloud computing or virtualization are likely to be interested in the technology, and thus their hits are both valuable and desired. But what about brand awareness? Don't we want to get our brand "out there"? Yes, and no. You want your brand out there, certainly, but you want it out there among people who will actually do something with that knowledge. You want to attract and educate non-customers who could be customers, not non-customers who will never, ever, in a million years be customers. Mass advertising and blogging might work for a brand like GM or Apple, whose products are targeted at, well, everyone. But while John Q. Farmer might enjoy listening to an iPod while he's out riding his combine, he isn't likely to give a hoot about application delivery or information security or how awesome the latest SSL VPN might be. Blogs cannot - and should not - go the way of traditional publishing.
We can't gate the content; that does us and readers a disservice. But in order to quantify the success of corporate blogging initiatives it is important to qualify, somehow, whether we're reaching the audiences we want to reach. The best way to do that is to artificially gate readership through relevant, quality posts. Choose quality over quantity. Qualify through relevancy. Let's not repeat the painful process publishing had to experience to arrive where it is today. Don't get sidetracked from your goals by lower hit counts than you'd hoped for. If you're writing quality posts and seeing little growth, you may need to reach out to your audiences rather than let them come to you. Syndication, participation in appropriate social networking sites, link and bookmark sharing, etc. are all ways to reach out and get your content in front of the appropriate audiences. What you want to see is consistent growth - even if it's small - over time, not only in hit counts but in referrals and in returning and new visitors, as well as lower exit and bounce rates. Hit count is only one factor in a complex calculation quantifying "success". As long as you're staying on focus and growing, you're doing it right and adding value, and you can be more sure that the hits you are getting are worth the effort you're putting forward.