
Use F5 REST API Python SDK to update an external datagroup


I have working code using the F5 Python REST SDK that can upload a txt file and create an external datagroup from it.   

But I cannot figure out how to use the API to update that datagroup file without deleting the datagroup and recreating it.


All of the examples I can find do the following:

  • create a datagroup file object from a sourcepath
  • create a datagroup object from the datagroup file object
  • display the created object's attributes

Then they delete the datagroup object and datagroup file object so that they can run again.  

I cannot find a way to reload the data from an updated data file into an existing datagroup object.

I have also seen code that can walk a datagroup and update items in it one by one, but I'd really like to use a file update model.


Any ideas?  Code sample follows:


from f5.bigip import ManagementRoot

# Connect to the BigIP
mgmt = ManagementRoot("IP-FIXME", "USER-FIXME", "PASSWD-FIXME")

def showDatagroups():
    print("\n\n**** Showing Datagroups")
    dgs = mgmt.tm.ltm.data_group.externals.get_collection()
    for idx, dg in enumerate(dgs):
        print("\n{}: {}".format(idx, dg.raw))
        print("\n{}: {}".format(idx, dg.name))
        if hasattr(dg, 'records'):
            print("\n{}: {}".format(idx, dg.records))
            for record in dg.records:
                print("\nrec: {}".format(record))
        else:
            print("\nObject {} has no records".format(dg.name))

def showDatagroupFiles():
    print("\n\n**** Showing DatagroupFiles")
    dgFiles = mgmt.tm.sys.file.data_groups.get_collection()
    for idx, f in enumerate(dgFiles):
        print('\n{}: {}'.format(idx, f.raw))

def uploadFile(f):
    # This upload works - it places the file in the uploads folder.
    # But I cannot seem to access the uploaded file with the datagroup create method
    print("\n\n**** Uploading datagroup file {}".format(f))
    mgmt.shared.file_transfer.uploads.upload_file(f)

def createDatagroupFile(sourcePath, dgname, dataType):
    print("\n\n**** Creating datagroup file {}, name {}, type {}".format(sourcePath, dgname, dataType))
    # check first if it's there
    dgFiles = mgmt.tm.sys.file.data_groups.get_collection()
    found = 0
    for idx, f in enumerate(dgFiles):
        if f.name == dgname:
            found = 1
            break
    if found:
        print("File {} already exists".format(dgname))
        # Here's where I would just update the file if I could
    else:
        print("Creating DG File {} from sourcePath {}".format(dgname, sourcePath))
        dgFile = mgmt.tm.sys.file.data_groups.data_group.create(
            sourcePath=sourcePath, name=dgname, type=dataType)

def createDatagroupFromFile(name, file):
    print("\n\n**** Creating datagroup {} from file {}".format(name, file))
    # check first if it's there
    dgs = mgmt.tm.ltm.data_group.externals.get_collection()
    found = 0
    for idx, dg in enumerate(dgs):
        if dg.name == name:
            found = 1
            break
    if found:
        print("Datagroup {} already exists".format(name))
        # Here's where I would just update the datagroup if I could
    else:
        dgObject = mgmt.tm.ltm.data_group.externals.external.create(
            name=name, externalFileName=file)

if __name__ == "__main__":
    fname = './dg_test5.txt'
    # using HTTP server for raw file -
    # this is the only way I could get the upload to work
    sourcePath = ''
    dgfilename = 'dg_test5.txt'
    dgname = 'dg_test5'
    uploadFile(fname)
    createDatagroupFile(sourcePath, dgfilename, 'string')
    createDatagroupFromFile(dgname, dgfilename)
    showDatagroups()
    showDatagroupFiles()

F5 Employee

Assuming you have both the data-group file object (tmsh list sys file data-group <object>) and the external data-group ltm object (tmsh list ltm data-group external <object>), sending a PATCH request to the file object overwrites the file contents. The ltm object is just a reference to the file, so the new data automatically becomes available to the iRule.


For example,

1) Before the change, the external data-group file contained the following data (see the bottom for the test iRule code)

# Edited for readability
16:02:19 /Common/ExternalDg <HTTP_REQUEST>: {Shota} Short
16:02:19 /Common/ExternalDg <HTTP_REQUEST>: {Ochako} Uravity
16:02:19 /Common/ExternalDg <HTTP_REQUEST>: {Izuku} Deku
16:02:19 /Common/ExternalDg <HTTP_REQUEST>: {Tenya} Ingenium

2) Update the file by calling PATCH /mgmt/tm/sys/file/data-group/<object>.

curl -sku $PASS \
  https://$HOST/mgmt/tm/sys/file/data-group/dg-testMHA.txt \
  -X PATCH -H "Content-type: application/json" \
  -d '{"sourcePath":""}'

where dg-testMHA.txt is the existing file object. Note that the file must contain all the records, since the PATCH replaces the contents rather than appending to them. When the call completes successfully, you will see the following messages in the ltm log:
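For reference, a string-type external data-group file (the file the sourcePath URL points at) holds every record in "key" := "value", form, one per line. The thread does not show the actual file, so the following is an illustration built from the names in the log output:

```
"Shota" := "Short",
"Ochako" := "Uravity",
"Izuku" := "Deku",
"Tenya" := "Ingenium",
"Tsuyu" := "Floppy",
```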

# Edited. The same messages repeat because I have two TMMs.
16:04:24 tmm1 External Datagroup (/Common/dg-testMHA) queued for update.
16:04:24 tmm External Datagroup (/Common/dg-testMHA) queued for update.
16:04:25 tmm1 External Datagroup (/Common/dg-testMHA) update finished.
16:04:25 tmm External Datagroup (/Common/dg-testMHA) update finished.

3) Check the data contents again. New data is added.

# Edited. I do not know why the order is different.
16:04:35 /Common/ExternalDg <HTTP_REQUEST>: {Tsuyu} Floppy
16:04:35 /Common/ExternalDg <HTTP_REQUEST>: {Tenya} Ingenium
16:04:35 /Common/ExternalDg <HTTP_REQUEST>: {Ochako} Uravity
16:04:35 /Common/ExternalDg <HTTP_REQUEST>: {Shota} Short
16:04:35 /Common/ExternalDg <HTTP_REQUEST>: {Izuku} Deku

Appendix: Test iRule code

when HTTP_REQUEST {
    foreach {record} [class get dg-testMHA] {
        log local0. "$record"
    }
}


Thanks Satoshi! I was able to employ PATCH as you suggested using this Python code:



import json
import requests

def update_edg_file(rq, url, fileserver, newfilename, dgfname):
    path = fileserver + "/" + newfilename
    payload = {}
    payload['sourcePath'] = path
    req = '{}/sys/file/data-group/{}'.format(url, dgfname)
    resp = rq.patch(req, json.dumps(payload))
    print("update_edg_file, name: {}, PATCH response: {}".format(dgfname, resp))

# in main ...
url_base = 'https://%s/mgmt/tm' % hostname
rs = requests.session()
rs.auth = (username, password)
rs.verify = False
rs.headers.update({'Content-Type': 'application/json'})
# ...
update_edg_file(rs, url_base, external_file_server, filename, edg_file_name)

The update is pretty quick.
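For anyone who wants to stay inside the f5-common-python SDK from the original question rather than drop to raw requests, the same update can be sketched as below. This is a sketch under assumptions: the helper that builds the endpoint and body is plain Python, modify() on a loaded sys/file/data-group resource is assumed to issue the equivalent PATCH with only the supplied attribute, and all host and file names are placeholders.

```python
import json

def build_patch(base_url, dg_file_name, source_path):
    # Build the endpoint and JSON body for PATCHing the
    # sys/file/data-group file object so it re-reads sourcePath.
    endpoint = "{}/mgmt/tm/sys/file/data-group/{}".format(base_url, dg_file_name)
    body = json.dumps({"sourcePath": source_path})
    return endpoint, body

def update_via_sdk(mgmt, dg_file_name, source_path):
    # Needs a live BIG-IP: load the existing file object, then
    # modify() sends a PATCH carrying just the changed attribute.
    dg_file = mgmt.tm.sys.file.data_groups.data_group.load(name=dg_file_name)
    dg_file.modify(sourcePath=source_path)

# e.g. build_patch("https://10.1.1.1", "dg_test5.txt",
#                  "http://fileserver.example/dg_test5.txt")
```

Because the ltm data-group is only a reference to the file object, no change to the ltm object itself should be needed after the PATCH.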