database
F5 force connection reset on pool member
I have two MySQL servers behind an F5, using priority group activation for an active-passive setup, so requests always go to the primary as long as it is up. Question: the primary failed and all my connections moved to the standby box; good. Now the primary is back, but my clients are still connected to the standby DB because of their persistent connections. How do I force the pool to drop all existing connections so traffic goes back to the new primary DB? The problem is that if clients keep writing data to the standby, it will create consistency issues. Forcing the member offline doesn't drop active connections.
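One approach, sketched here rather than prescribed: have the BIG-IP reset connections when a member is marked down, and flush any connections still pinned to the standby once the primary returns. The pool name, member address, and port below are hypothetical placeholders.

    # Tear down existing connections when a pool member is marked down
    # ("mysql_pool" is a placeholder name)
    tmsh modify ltm pool mysql_pool service-down-action reset

    # After the primary is back up, manually flush connections still
    # pinned to the standby member (address and port are placeholders)
    tmsh delete sys connection ss-server-addr 10.0.0.12 ss-server-port 3306

Note that service-down-action only fires on the way down; for the failback case described here, the manual connection delete (or an equivalent iRule using LB::detach) is what actually breaks the persistent sessions.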
Big IQ High Availability Configuration Fails

I have two BIG-IQ virtual machines built from the BIG-IQ 7.1.0.1-133 software. There are no firewalls between the two, but they reside on two separate VM hosts. When I configure high availability with no auto-failover (no DCDs deployed yet), the process hangs at "Configuring Database Replication" for a long time, though it eventually reports that it is complete. At that point, the VM that should be primary says it is still in standalone mode, and the secondary VM has the configuration replicated but its web GUI doesn't come up. I know the configuration is replicated because when I reboot the hosts and check the high availability settings, the primary reports a failed HA state while the secondary reports success. There is also an error on the primary stating "Secondary is not actively streaming. Postgres may be down." I checked the logs for Postgres and TokuMX and I'm seeing fatal errors, specifically that the role "root" doesn't exist. My question: where else should I be looking to troubleshoot this issue? Has anyone run into this and found the answer? Thank you
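No definitive answer here, but since the error complains that the secondary is not streaming, one hedged suggestion is to interrogate Postgres directly on the primary. This assumes shell access to the BIG-IQ and that its local Postgres accepts connections as the "postgres" role; both are assumptions about this particular build.

    # On the primary: an empty pg_stat_replication result means no standby
    # ever connected, i.e. replication never started
    psql -U postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"

    # The 'role "root" does not exist' errors suggest something is connecting
    # as the OS user; list the roles that actually exist
    psql -U postgres -c "\du"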
DataBase not found in DataSource error in Grafana Dashboard for F5 DDoS

Hello folks, I followed the procedure to install the F5 app in Grafana, located at https://github.com/f5devcentral/f5-bados-app. However, I get an error that says "DataBase not found in DataSource" every time I try to create a data source in the Grafana admin website. I am putting the word "default" in the Database field of the ADMdb details. I don't know where to manually create or copy the database. Thanks, JM
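Not specific to this app, but one way to see exactly what Grafana saved for the data source (including the database field) is its HTTP API. The credentials and URL below are Grafana's stock defaults and are assumptions about your install.

    # List configured data sources and inspect the stored "database" value
    curl -s -u admin:admin http://localhost:3000/api/datasources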
TNS LISTENER ERROR ON ORACLE RAC VIRTUAL SERVER IRULE

What steps should be included in the Oracle RAC iRule when load balancing Oracle DB servers? I get the following error on my VIP (status green) at 10.1.232.89:1521:

Connection test failed. Listener refused the connection with the following error:
ORA-12516, TNS:listener could not find available handler with matching protocol stack
The Connection descriptor used by the client was:
(description=(address=(protocol=tcp)(host=10.1.232.89)(port=1521))(connect_data=(SERVICE_NAME=tpp_n4)))

oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:112)
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:173)
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:460)
oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:411)
oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:490)
oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:202)
oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:474)
com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:505)
com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:369)
sun.reflect.GeneratedMethodAccessor797.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
org.apache.beehive.netui.pageflow.internal.FlowControllerAction.execute(FlowControllerAction.java:52)
...

However, connection tests from the Oracle server to all three DB servers (10.1.232.191, 10.1.232.192, 10.1.232.193) are fine. The service name is tpp_n4, and the iRule is in the comments.
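For what it's worth, ORA-12516 typically indicates the database instance has exhausted its process/session slots rather than a problem in the iRule itself, which would fit the symptom of direct connections succeeding while the VIP, which concentrates client connections, fails. A quick check worth running on each RAC node (standard dictionary view, run as a DBA user):

    -- If MAX_UTILIZATION has reached LIMIT_VALUE, raise PROCESSES/SESSIONS
    SELECT resource_name, current_utilization, max_utilization, limit_value
      FROM v$resource_limit
     WHERE resource_name IN ('processes', 'sessions');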
The Challenges of SQL Load Balancing

#infosec #iam Load balancing databases is fraught with operational and business challenges.

While cloud computing has brought to the forefront of our attention the ability to scale through duplication, i.e. horizontal scaling or "scale out" strategies, this strategy tends to run into challenges the deeper into the application architecture you go. Working well at the web and application tiers, a duplicative strategy tends to fall on its face when applied to the database tier. Concerns over consistency abound, with many simply choosing to throw out the concept of consistency and adopting instead an "eventually consistent" stance, in which it is assumed that data in a distributed database system will eventually become consistent and cause minimal disruption to application and business processes. Some argue that eventual consistency is not "good enough" and cite the failure of such strategies to adequately address failures. Thus there are a number of vendors, open source groups, and pundits who spend time attempting to address both concerns, consistency and failure handling. The result is database load balancing solutions.

For the most part such solutions are effective. They leverage master-slave deployments, typically used to address failure and able to automatically replicate data between instances (with varying levels of success when distributed across the Internet), and attempt to intelligently distribute SQL-bound queries across two or more database systems. The most successful of these architectures is the read-write separation strategy, in which all SQL transactions deemed "read-only" are routed to one database while all "write"-focused transactions are directed to another. Such foundational separation allows higher-layer architectures to be implemented, such as geography-based read distribution, in which read-only transactions are further distributed across geographically dispersed database instances, all of which ultimately act as "slaves" to the single master database that processes all write-focused transactions. This results in an eventually consistent architecture, but one that mitigates the disruptive aspects of eventual consistency by ensuring that the most important transactions, write operations, are in fact consistent.
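To make the read-write separation idea concrete, here is a deliberately naive iRule sketch. It is illustration only, not production code: it assumes the first client payload is a plain-text query (real MySQL connections begin with a server handshake, and queries arrive in protocol frames), and the pool names are invented.

    when CLIENT_ACCEPTED {
        # Hold the first client payload for inspection
        TCP::collect
    }
    when CLIENT_DATA {
        # Crude classification: treat a leading SELECT as a read
        if { [string toupper [TCP::payload]] starts_with "SELECT" } {
            pool mysql_read_pool
        } else {
            pool mysql_write_pool
        }
        TCP::release
    }

A real implementation has to parse the database wire protocol rather than substring-match, which is precisely the parsing cost discussed below.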
Even so, there are issues, particularly with respect to security.

MEDIATION inside the APPLICATION TIERS

Generally speaking, mediating solutions are a good thing when they're external to the application infrastructure itself, i.e. the traditional three tiers of an application. The problem with mediation inside the application tiers, particularly at the data layer, is the same for infrastructure as it is for software solutions: credential management.

Databases maintain their own sets of users, roles, and permissions. Even as applications have moved toward shared identity stores, databases have not. This is partly due to the nature of data security and the need for granular permission structures, down to the cell in some cases, including transactional security that allows some users to update, delete, or insert while others are granted a different subset of permissions. More difficult to overcome, though, is the tight coupling of identity to connection in databases. With web protocols like HTTP, identity is carried along at the protocol level. This means it can be transient across connections, because it is often stuffed into an HTTP header via a cookie or stored server-side in a session; it is tied not to the connection but to identifying information. At the database layer, identity is tightly coupled to the connection: the connection itself carries the credentials with which it was opened.

This gives rise to problems for mediating solutions, not just load balancers but also software solutions such as ESB (enterprise service bus) and EII (enterprise information integration) styled solutions. Any device or software that attempts to aggregate database access for any purpose eventually runs into the same problem: credential management. This is particularly challenging when load balancing is applied to databases.

LOAD BALANCING SQL

To understand the challenges of load balancing SQL, remember that there are essentially two models of load balancing: transport layer and application layer. At the transport layer, i.e. TCP, connections are only temporarily managed by the load balancing device. The initial connection is "caught" by the load balancer and a decision is made, based on transport-layer variables, about where it should be directed. Thereafter there is, for the most part, no further interaction between the load balancer and the connection other than forwarding it to the previously selected node. At the application layer, the load balancing device terminates the connection and interacts with every exchange. This affords the load balancing device the opportunity to inspect the actual data, or the application-layer protocol metadata, in order to determine where a request should be sent.

Load balancing SQL at the transport layer is less problematic than at the application layer, yet it is at the application layer that database load balancing delivers most of its value, because that is where distribution based on "read" versus "write" operations becomes possible. Accomplishing this requires that the SQL be inline: the SQL being executed must actually be included in the application code and executed over a connection to the database. If your application uses stored procedures, this method will not work for you, and it is important to note that many packaged enterprise applications rely on stored procedures and thus cannot leverage this kind of load balancing as a scaling option. Your application, and how your organization has chosen to protect its data, will determine which access method is used. Inline SQL affords the developer greater freedom, but at the cost of security, increased programming effort (to mitigate the inherent risks), difficulty in optimizing data and indices as data volumes change, and deployment burdens. There is lively debate about the merits of both access methods and how to overcome their risks; the OWASP group has identified injection attacks as among the easiest to exploit and the most damaging in impact.

Application-layer distribution also requires that the load balancing service parse the SQL dialect in use, whether MySQL's SQL or T-SQL (Microsoft's Transact-SQL). Databases, of course, are designed to parse these string-based commands and are optimized to do so. Load balancing services generally are not, and depending on the implementation of their underlying parsing capabilities, may incur significant performance penalties in doing so.
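To illustrate why stored procedures defeat payload-based classification, compare what actually crosses the wire in the two access styles (table and procedure names are invented):

    -- Inline SQL: an intermediary can see the verb and classify the request
    SELECT name, price FROM products WHERE id = 42;
    UPDATE products SET price = 9.99 WHERE id = 42;

    -- Stored procedure: the wire carries only a CALL; whether the body
    -- reads or writes is invisible to the load balancer
    CALL adjust_product_price(42, 9.99);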
Regardless of those issues, an increasing number of organizations view SQL load balancing as a means to achieve a more scalable data tier. Which brings us back to the challenge of managing credentials.

MANAGING CREDENTIALS

Many solutions attempt to address the issue of credential management by simply duplicating credentials locally; that is, they create a local identity store that can be used to authenticate requests against the database. Ostensibly the credentials match those in the database (or in the identity store the database uses, as can be configured for MSSQL) and are kept in sync. This poses an operational challenge similar to that of any distributed system: synchronization and replication. Such processes are not easily (if at all) automated, and rarely does the local identity store offer the same level of security and granularity of permissions as the database itself. What you generally end up with is a very loose "allow/deny" set of permissions on the load balancing device, which opens the door to exploitation, along with cached credentials that can lead to unauthorized access to the data source.

Security risks also arise from applying to SQL connections some of the same optimization techniques that application delivery solutions offer for TCP connections. For example, TCP multiplexing (sharing connections) is a common means of reusing web and application server connections to reduce latency, by eliminating the overhead of opening and closing TCP connections. Similar techniques at the database layer have been used by application servers for many years; connection pooling is common, and it is essentially duplicated at the application delivery tier through features like SQL multiplexing. Both connection pooling and SQL multiplexing incur security risks, because shared connections require shared credentials. So either every access to the database uses the same credentials (a significant negative when you consider the loss of an audit trail) or we return to managing duplicate sets of credentials, one set at the application delivery tier and another at the database, which, as noted earlier, incurs additional management and security risks.

YOU CAN'T WIN FOR LOSING

Ultimately the decision to load balance SQL must be driven by a combination of business and operational requirements. Many organizations successfully leverage SQL load balancing to achieve very high scale. Generally speaking, the resulting solutions, such as those often touted by eBay, are based on sound architectural principles such as sharding; they are designed as strategic solutions, not tactical responses to operational failures, and they rarely involve inspection of inline SQL commands. Rather, they rely on the ability to discern which database should be accessed given the function being invoked or the type of data being accessed, and then use a traditional database connection to connect to the appropriate database. This does not preclude the use of application delivery solutions as part of such an architecture; rather, it indicates a need to collaborate across the application delivery and infrastructure tiers to determine the strategy most likely to maintain high availability, scalability, and security across the entire architecture.

Load balancing SQL can be an effective means of addressing database scalability, but it should be approached with an eye toward its potential impact on security and operational management.
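As a concrete footnote to the multiplexing discussion above: on BIG-IP, server-side connection reuse is configured through a OneConnect profile. The sketch below uses hypothetical profile and virtual server names, and OneConnect is designed with HTTP-style traffic in mind, so for database protocols it runs straight into the shared-credential concern described earlier.

    # A /32 source mask restricts reuse to connections from the same client IP,
    # which narrows (but does not remove) the credential-sharing exposure
    tmsh create ltm profile one-connect mysql_oneconnect { source-mask 255.255.255.255 }

    # Attach the profile to the virtual server in front of the database pool
    tmsh modify ltm virtual vs_mysql profiles add { mysql_oneconnect }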
What are the pros and cons to keeping SQL in Stored Procs versus Code
Mission Impossible: Stateful Cloud Failover
Infrastructure Scalability Pattern: Sharding Streams
The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…
SQL injection – past, present and future
True DDoS Stories: SSL Connection Flood
Why Layer 7 Load Balancing Doesn't Suck
Web App Performance: Think 1990s.