Lightboard Lessons: Secure Data Tokenization

Companies subject to security audits such as PCI DSS can benefit from a solution that moves sensitive information off their web servers and handles it at the enterprise edge or ingress point, reducing the exposure of live, regulated data on the internal IT network. In this edition of Lightboard Lessons, I walk through a creative solution developed by F5's own Adam Auerbach and Ziv Saar (with iRule support from Rodney Newton) that can significantly reduce the cost and complexity of sensitive data audits.
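
Conceptually, the BIG-IP intercepts the sensitive field at the ingress point, swaps it for a token obtained from a tokenization service, and passes only the token on to the internal servers. A minimal sketch of that flow (illustrative only; the event, field name, and delimiters below are assumptions, not the production iRule from the white paper):

    when HTTP_REQUEST {
        # Sketch: look for a sensitive field (e.g., a card number) in the query string
        set ccNum [findstr [HTTP::uri] "data=" 5 "&"]
        if { $ccNum ne "" } {
            # Hand the value to a tokenization service over a sideband connection,
            # then forward only the returned token to the web servers so that
            # live card data never reaches the internal network
        }
    }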

Related Resources:

Secure Data Tokenization White Paper

Published Aug 03, 2016
Version 1.0
  • Great, John! Very interesting. Waiting for the iRules that do the job. One question: no decryption mechanism occurs server-side, in order to meet the PCI DSS requirements, right? So if the sensitive data reaches the server in an obfuscated or even encrypted format, nothing can be done with it in terms of validation/transactions/verification; I mean, this data is ultimately useless. So why do some organizations or websites require users to post it?

    Sorry if I misunderstood something, John.

  • Hi John! I have been looking at the example in the SDT white paper, and if I'm not wrong, there seem to be some pieces missing in the code examples. In the Extractor iRule there is a connect but no send/receive... Any comments on that?

  • @Fulmetal, great question! In this scenario, a company that processes sensitive data (credit card numbers, personal information, etc.) would need to ensure that all of its web servers comply with all the PCI DSS requirements. That can be a daunting task. Because of the nature of what these companies do (banks, online marketplaces, etc.), they have a valid business requirement to let their users post this sensitive data. The solution outlined here lets these same companies move all the PCI DSS compliance off their web servers and onto the BIG-IP, which significantly streamlines the compliance work they have to do.

  • @Mats Nystrom, I reached out to the guys who contributed to the code and here's some feedback from them:


    We are not supposed to have a send or receive; the request is being parsed, and we are simply releasing the TCP connection:

    # Overwrite the parsed TCP payload with the rewritten data, then
    # release the connection (no sideband send/receive needed at this point)
    TCP::payload replace 0 [TCP::payload length] $newdata3
    TCP::release
    
    Then parsing it back in the response from Voltage:

    when HTTP_RESPONSE {
        # Collect HTTP response data
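
    For reference, a fuller response-side handler might look something like the sketch below; the HTTP_RESPONSE_DATA handling and the findstr delimiters are my assumptions, not the white paper's code:

    when HTTP_RESPONSE {
        # Collect the response payload so it can be parsed
        HTTP::collect
    }
    when HTTP_RESPONSE_DATA {
        # Assumes the token comes back as "data=<token>&..." in the body
        set token [findstr [HTTP::payload] "data=" 5 "&"]
        HTTP::release
    }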
    
    By the way, there is a small typo in that example: the comment says you are ending CLIENT_DATA when you are only ending the if statement … that might throw people off:
    
    
    when CLIENT_DATA {
        if {[TCP::payload] contains "GET /tokenize?"} {
            # Optional logging for debugging
            log local0. "Tokenization GET request"
        } else {
            log local0. "Some other request - let it through"
            TCP::release
            return
        } ;# end when CLIENT_DATA

        # get the query string: this is what we want to encrypt
        set plaintext [findstr [TCP::payload] "data=" 5 "&"]
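
    To make the scoping explicit, a corrected skeleton would close the if statement and the event separately, something like this sketch:

    when CLIENT_DATA {
        if {[TCP::payload] contains "GET /tokenize?"} {
            # ... tokenization path ...
        } else {
            TCP::release
            return
        } ;# ends the if statement
        # ... continue parsing the tokenize request ...
    } ;# ends when CLIENT_DATA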
    

    Let me know if you have any other questions. Thanks!


  • Hi John!

    Thanks for your answer!

    I was referring to the Intercept_CC iRule that does the extraction and the sideband connection to the encryptor. I still think it needs send and recv, but on the other hand I might be wrong!


    (Adding some lines of code to make it more understandable)


    # Build simple GET request to encrypt credit card number
    set tokenizationRequest "GET /tokenize?data=$ccNum HTTP/1.1\r\nUser-Agent: AlmostCurl!\r\nHost: ${static::HostString}\r\nAccept: */*\r\n\r\n"
    
    # Connect to encryptor virtual server
    set TokenServer [connect -protocol TCP -myaddr $static::myaddr -timeout 100 -idle 5 -status connect_status $static::TokenizationVirtualServer]
    
    
    # Adding "send and recv" that I think is missing in the whitepaper
    set send_info [send -timeout 3000 -status send_status $TokenServer $tokenizationRequest]
    # Not a very robust recv, needs adjustments...
    set recv_data [recv -timeout 3000 $TokenServer]
    
    
    set Token [findstr $recv_data "" 7 ""]
    set TokenizedData "$FirstPartOfInboundRequest$Token$SecondPartOfInboundRequest"
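
    Since a single recv with a timeout can return a partial response, one way to harden it (a sketch, assuming the encryptor's reply is a short HTTP response whose headers end in a blank line) is to accumulate until the message looks complete:

    # Hypothetical hardening of the recv above: keep reading until the
    # end-of-headers marker shows up, or give up after ten attempts
    set recv_data ""
    for {set tries 0} {$tries < 10} {incr tries} {
        append recv_data [recv -timeout 300 $TokenServer]
        if { $recv_data contains "\r\n\r\n" } { break }
    }

    The sideband handle should also be closed with close $TokenServer once the token has been read.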