In R80 and above, Check Point has introduced a management API that offers a whole new realm of possibilities for managing policies. This also rendered many traditional tools that we might have considered for doing this type of work obsolete (such as cp_merge). The availability of the API has introduced a number of options for using scripts or third-party tools to manage and migrate your firewall policy.
Suppose that you have two Check Point management servers that you want to merge. One is running R80 and the other is running R77.30. You want to eventually end up with a single management server, but don’t want to have to manually copy everything over. What do you do?
That was the same question I faced a short time ago. I recently needed to perform this type of migration, combining the policies of two management servers in order to move the management of our office firewall cluster from a legacy management server to one running R80 in AWS. This gave me a great opportunity to learn how to use the R80 API.
While researching the best way to accomplish the migration, I came across a tool released by the Check Point APIs Team called ExportImportPolicyPackage. This is a python script that leverages the R80.10 API to create a tar.gz file that can be imported into another management server.
The first challenge:
This tool requires R80.10 across the board. The source management server was running R77.30 and the destination one was running R80 in AWS. In order to use the R80.10-compatible tool, I needed to upgrade everything to R80.10 first. To avoid manipulating production systems, I did all this work in VMs in our lab.
The R77.30 management upgrade was relatively straightforward: use the R80.10 migration tools for Pre-R80 Gaia versions (Check_Point_R80.10_migration_tools_PreR80.Gaia.tgz) to run an upgrade_export of the policy on the R77.30 management server, build a VM with a clean install of R80.10, and import the migrated policy. Easy.
The R80 to R80.10 upgrade for the destination management server was a bit more interesting. I was originally thinking I could just do an in-place HFA upgrade to R80.10, but since the server was running in AWS, that method isn’t supported (sk118717). If you try to run the HFA upgrade, it looks like this:
awsmgmt> installer verify 1
Info: Initiating verify of Check_Point_R80.10_T421_Upgrade_from_R80_FULL.tgz...
Interactive mode is enabled. Press CTRL + C to exit (this will not stop the operation)
Result: Installation is not allowed.
Reason:
Details: Operation failed. Upgrades in Public Cloud environments (AWS, Microsoft Azure and Google Cloud Platform) are not supported.
Found the following file exist on the machine: /etc/in-aws
(Full disclosure - I really wanted to see what would happen if I deleted /etc/in-aws, but I didn’t want to end up with an unsupported configuration for our production environment).
To make this more interesting/annoying, there are currently no published migration tools for taking a management server from R80 to R80.10. The R80.10 migration tools only work on R80.10 systems, and the Pre-R80 migration tools only work on R77.30 and older (not on R80 — I tried). I asked for some advice on what to use, and the SecurePlatform/Linux package was suggested, but that also only works on R77.30 and older.
Without a good in-place upgrade option, I built an R80 VM in the lab, imported the production policy, did an HFA upgrade to R80.10, and then ultimately used this system to do the policy migration and produce a final export which I could load into a new R80.10 AWS management server.
The next challenge - getting the API working. Newer versions of Check Point actually ship with a version of Python, and judging by some of the flags in its help output, ExportImportPolicyPackage looks like it could run locally on a management server. However, the version of Python bundled with Gaia was missing some functionality used by the script, so that was not possible. Running it remotely from my machine was the best option. However, the management API doesn’t allow remote access by default. To enable remote API access, open SmartConsole and navigate to Manage and Settings -> Blades -> Management API -> Advanced Settings. If this setting is changed, you will need to restart the API by SSHing into the management server and running the api restart command.
To confirm that the API is usable and available remotely, run the api status command. If Accessibility shows “Require all granted” it means that any system can access the API (on R80 this will show “Allow all”).
[Expert@awsmgmt:0]# api status

API Settings:
---------------------
Accessibility:                      Require all granted
Automatic Start:                    Enabled

Processes:
Name      State     PID      More Information
-------------------------------------------------
API       Started   14472
CPM       Started   14350    Check Point Security Management Server is running and ready
FWM       Started   13807

Port Details:
-------------------
JETTY Internal Port:      50276
APACHE Gaia Port:         443

--------------------------------------------
Overall API Status: Started
--------------------------------------------

API readiness test SUCCESSFUL. The server is up and ready to receive connections

Notes:
------------
To collect troubleshooting data, please run 'api status -s <comment>'
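With the API reachable, every interaction follows the same basic pattern: POST a JSON command to /web_api/&lt;command&gt;, log in first to obtain a session id, pass that id in the X-chkp-sid header on subsequent calls, and publish before logging out. A minimal sketch of that pattern (the server address and credentials are placeholders, and certificate verification is disabled on the assumption of a lab self-signed certificate):

```python
import json
import ssl
import urllib.request

# Placeholder management server address - substitute your own.
MGMT = "https://198.51.100.10/web_api"

def build_request(command, payload, sid=None):
    """Build an HTTP request for one management API command.

    Every call is a POST with a JSON body; authenticated calls carry
    the session id (returned by 'login') in the X-chkp-sid header.
    """
    headers = {"Content-Type": "application/json"}
    if sid:
        headers["X-chkp-sid"] = sid
    return urllib.request.Request(f"{MGMT}/{command}",
                                  data=json.dumps(payload).encode(),
                                  headers=headers, method="POST")

def api_call(command, payload, sid=None):
    """Send a command; cert checks are off for a lab self-signed cert."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(build_request(command, payload, sid),
                                context=ctx) as resp:
        return json.loads(resp.read())

# Typical session (commented out - requires a live management server):
# sid = api_call("login", {"user": "fwadmin", "password": "..."})["sid"]
# api_call("add-host", {"name": "test_host", "ip-address": "203.0.113.5"}, sid)
# api_call("publish", {}, sid)   # nothing is saved until you publish
# api_call("logout", {}, sid)
```

This is exactly the pattern the ExportImportPolicyPackage script drives under the hood, just with many more commands per session.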
At this point, we can test exporting the policy from an administrator workstation running Python. In this case, the command that worked best was:
python import_export_package.py -m <management server IP> -op export -u <username> -n <policy package> --ac --debug on
This command exports the access control policy, which was what we were interested in duplicating for this exercise. As the export runs, it outputs lists of the rules and objects it is processing. Keep an eye on this output to see if there are any issues. You will see something similar to the following (abbreviated):
Exporting Access Control layers
Exporting Access Layer [Network]
Retrieved X out of X rules (2%)
....
Retrieved X out of X rules (100%)
Processing rules and sections
Exporting access-roles from layer [Network]
Exporting services-udp from layer [Network]
Exporting groups from layer [Network]
Exporting hosts from group [Group Name]
....
Exporting layer settings of layer [Network]
Done exporting layer 'Network'.
Once this is completed, review the console output for any issues. In our case, everything looked good, so I went ahead and attempted to import the policy to the new management server with the following command:
$ python import_export_package.py -m <new management server IP> -op import -u <username> -n Office --all --debug on -f exported__package__<policyname>__2017_10_17_13_06.tar.gz
Of course, this failed. Looking at the API logs, it seemed that the publish operation was failing. Since the import wasn’t working, I wanted to determine whether I could publish anything at all through the API, even from the CLI on the management server itself. It turns out I couldn’t:
[Expert@testmgmt:0]# mgmt_cli add host name test_host ip-address 188.8.131.52
Username: fwadmin
Password:
---------------------------------------------
Time: [13:34:53] 17/10/2017
---------------------------------------------
"Publish operation" failed (100%)
Publish failed, the following error has occurred:
tasks:
- task-id: "01234567-89ab-cdef-ab7f-5122bc49f89f"
  task-name: "Publish operation"
  status: "failed"
  progress-percentage: 100
  suppressed: false
Discarding the changes.
Even this simple operation wasn’t successful, so something wasn’t right. Digging through fwm.elg, I found a java exception indicating that the publish was failing due to a setting in our management server requiring a name and description for each session.
Disabling the “All sessions must have a name and description” option allowed publishing through the API to work successfully, and also for the import through the Python script to run.
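An alternative to disabling the option globally would be to name each API session at login time: the management API's login command accepts optional session-name and session-description fields, which should satisfy the requirement. A sketch of the login payload (the field values here are arbitrary examples, and I haven't verified this path against that particular setting):

```python
import json

def login_payload(user, password, name=None, description=None):
    """Build the JSON body for the management API 'login' command.

    session-name / session-description are optional fields on 'login';
    supplying them names the session up front, rather than turning the
    "sessions must have a name and description" check off entirely.
    """
    body = {"user": user, "password": password}
    if name:
        body["session-name"] = name
    if description:
        body["session-description"] = description
    return json.dumps(body)

# Example body for a scripted import session:
print(login_payload("fwadmin", "...", "policy-import",
                    "ExportImportPolicyPackage run"))
```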
Once the import completed, I noticed a number of caveats that needed manual adjustment. First, Check Point objects such as the cluster itself don’t migrate over - a placeholder object is created instead. To fix this, I manually created the cluster object to match what was in the old policy. Most rules migrated over fine, although I did need to manually touch any rule referencing an object whose IP or subnet overlapped with a differently named existing object, since the tool does not create objects with overlapping networks or IP addresses. This wasn’t a major problem, as it allowed me to identify these overlaps and use the appropriate group from the original policy.
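Finding those overlaps by hand is tedious, so it can help to compare the network objects from the two policies and flag differently named objects that cover overlapping address space. A small sketch using the standard ipaddress module (the object lists here are stand-ins for what you would pull from each server with commands like show-hosts and show-networks):

```python
import ipaddress

def find_overlaps(old_objects, new_objects):
    """Report pairs of differently named objects whose networks overlap.

    Each object is a (name, cidr) tuple, e.g. ("lan", "10.1.0.0/16").
    """
    overlaps = []
    for old_name, old_cidr in old_objects:
        old_net = ipaddress.ip_network(old_cidr)
        for new_name, new_cidr in new_objects:
            new_net = ipaddress.ip_network(new_cidr)
            if old_name != new_name and old_net.overlaps(new_net):
                overlaps.append((old_name, new_name))
    return overlaps

# Stand-in data - in practice, populate these from the API.
old = [("office-lan", "10.1.0.0/16")]
new = [("hq-networks", "10.0.0.0/8"), ("dmz", "192.0.2.0/24")]
print(find_overlaps(old, new))  # [('office-lan', 'hq-networks')]
```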
When reviewing the completed policy, be sure to check the following:
- Any rule containing an object named export_error or import_error
  Most of these were the result of objects that already existed in the destination policy
- NAT configs
  A lot of these didn’t transfer consistently (likely due to object duplication) and needed to be manually confirmed
- VPN tunnels
  We didn’t have many VPN tunnels, but they were all re-configured from scratch
- Check Point specific config
  Every setting for the new cluster object needed to be manually copied over and verified
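The first check in that list is easy to automate once you have the rulebase out of the API. A sketch, assuming you have already flattened each rule's source/destination/service fields into a list of member object names (the sample rulebase below is hypothetical):

```python
def rules_with_error_objects(rules):
    """Return rule numbers that reference an import/export placeholder.

    'rules' is a list of (rule_number, member_names) pairs, where the
    member names are flattened from each rule's source, destination,
    and service fields.
    """
    flagged = []
    for number, members in rules:
        if any(name.startswith(("export_error", "import_error"))
               for name in members):
            flagged.append(number)
    return flagged

# Stand-in rulebase - in practice, build this from show-access-rulebase.
rules = [
    (1, ["office-lan", "any", "https"]),
    (2, ["export_error_3", "dmz", "smtp"]),
]
print(rules_with_error_objects(rules))  # [2]
```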
Once this is complete, you should be able to cut over the management to the new combined management server.
Hopefully this helps you in the event you face a similar project in the future.