How to Access Production Claims Data
The production environment offers access to enrollee claims data, which contains Protected Health Information (PHI). To retrieve enrollee claims data, Prescription Drug Plan (PDP) sponsors must first complete the steps for production access.
AB2D recommends using V2 of the API
Version 2 is the current version and it follows the FHIR R4 standard. The _until parameter is only available with V2. Version 1 follows the FHIR STU3 standard.
Instructions
The production and sandbox environments use the same endpoints and workflow. You can follow the same steps as you did in the sandbox for production data.
You’ll still need a bearer token to call the API, but use the production identity provider (idp.cms.gov), production URL (api.ab2d.cms.gov), and credentials issued by the AB2D team instead.
- Start a job:
GET /api/v2/fhir/Patient/$export
- Retrieve metadata:
GET /api/v2/fhir/metadata
- Cancel job:
DELETE /api/v2/fhir/Job/{job_uuid}/$status
- Check the job status:
GET /api/v2/fhir/Job/{job_uuid}/$status
- Download files:
GET /api/v2/fhir/Job/{job_uuid}/file/{file_name}
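If you are building your own client rather than using the sample scripts, the start-job call above can be sketched in Python's standard library. This is an illustrative sketch, not AB2D's reference client: the Accept and Prefer header values follow the general FHIR Bulk Data export convention and should be confirmed against the AB2D documentation before use.

```python
from urllib.request import Request

API_BASE = "https://api.ab2d.cms.gov"  # production URL from this guide


def build_start_job_request(bearer_token):
    """Build (but do not send) the request that starts an export job.

    The Accept and Prefer headers follow the FHIR Bulk Data export
    convention; confirm the exact values against the AB2D docs.
    """
    return Request(
        API_BASE + "/api/v2/fhir/Patient/$export",
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Accept": "application/json",
            "Prefer": "respond-async",
        },
        method="GET",
    )
```

Send the request with `urllib.request.urlopen` (or any HTTP client); per the bulk-export pattern, the response points you to the job-status endpoint listed above.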
Job expiration
Job IDs and file URLs expire after 72 hours or 6 downloads. If it takes more than 30 hours for a job to complete, the request will time out and fail. Reduce file sizes and download times by using parameters to filter the claims data returned.
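For example, the `_since` and `_until` parameters (the latter v2-only, as noted above) restrict the export to claims updated in a time window. The helper below is a hypothetical sketch using Python's standard library, not part of the AB2D client scripts.

```python
from urllib.parse import urlencode


def export_url(base, since=None, until=None):
    """Build a v2 export URL, optionally filtered by FHIR timestamps.

    since/until become the _since/_until query parameters; _until is
    only available with v2 of the API, per this guide.
    """
    params = {}
    if since:
        params["_since"] = since
    if until:
        params["_until"] = until
    query = urlencode(params)  # percent-encodes the timestamp colons
    url = f"{base}/api/v2/fhir/Patient/$export"
    return f"{url}?{query}" if query else url
```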
Format
Files are in NDJSON format, where each line is a Medicare claim written in JSON. The file naming standard uses a contract identifier and number to indicate sequence (e.g., Z123456_0001.ndjson).
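A minimal sketch of reading such a file in Python: each non-empty line is parsed as one JSON document (the field names in the test data are illustrative, not a schema guarantee).

```python
import json


def read_claims(ndjson_text):
    """Parse NDJSON export content: one JSON-encoded claim per line."""
    claims = []
    for line in ndjson_text.splitlines():
        if line.strip():  # skip blank lines defensively
            claims.append(json.loads(line))
    return claims
```

For large exports, iterate over the file object line by line instead of loading the whole file into memory.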
Sample client scripts
AB2D provides sample client scripts, but we encourage organizations to automate their individual processes. Sample scripts are not suitable for long-term use as they do not provide sufficient error checking, security, or auditing capabilities. You must run these scripts in a secure environment to protect PHI.
If you have multiple contracts, you can run scripts for them at the same time. However, use a separate directory, set of credentials, and terminal for each so that every job has its own environment variables.
Bash client
Download a ZIP file of the Bash API repository or learn how to clone it. Then, unzip or move the files into a specified directory (e.g., /home/abcduser/ab2d). Copy the script files to that directory.
I. Set up the environment
- Open a Bash shell to run your commands. Environment variables will only be valid inside this shell. Do not close this terminal before the download is complete.
- Go to the directory where the script files are located. Run ls to make sure the 4 scripts are there.
- Run the following command using the Base64 credential file you created for your bearer token (e.g., /home/abcduser/credentials_Z123456_base64.txt).
- Verify that the command worked and defined the correct environment variables:
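If you still need to create the Base64 credential file itself, one common pattern is to Base64-encode `client_id:client_secret` for use in an HTTP Basic Authorization header when requesting a bearer token from the identity provider. This is an assumption for illustration, not an AB2D specification; follow the credential instructions you received during production onboarding.

```python
import base64


def basic_auth_value(client_id, client_secret):
    """Base64-encode "client_id:client_secret" for a Basic auth header.

    Hypothetical helper: the exact credential format AB2D expects is
    defined by the production-access onboarding steps, not this sketch.
    """
    raw = f"{client_id}:{client_secret}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")
```

The resulting string would be sent as `Authorization: Basic <value>` to the token endpoint, or written to the credential file the scripts read.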
II. Start a job
- Request the Export endpoint in the same window used to create the environment variables and in the same directory where the scripts are located.
- Verify that a file named “jobId.txt” was created.
III. Check the job status
Use the same shell to check the job status until it is complete. The script pauses and rechecks automatically, so you do not need to run it more than once unless you exit it:
The script will create a file named response.json in the directory, which lists all of the requested files. Files have a maximum size, so if your data exceeds that size, additional files are created.
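The pause-and-recheck behavior can be sketched as a polling loop. This is an illustrative Python sketch, not the sample script itself; the 202-in-progress / 200-complete convention follows FHIR Bulk Data status polling and should be confirmed against the AB2D documentation.

```python
import time


def wait_for_job(fetch_status, max_wait_seconds=30 * 60 * 60, poll_seconds=30):
    """Poll a job-status callable until the export completes.

    fetch_status is any function returning (http_status, body):
    202 means still in progress, 200 means complete. The 30-hour
    ceiling mirrors the job timeout described in this guide.
    """
    waited = 0
    while waited < max_wait_seconds:
        status, body = fetch_status()
        if status == 200:
            return body  # body lists the files to download
        if status != 202:
            raise RuntimeError(f"unexpected status {status}")
        time.sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError("job did not finish within the allowed window")
```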
IV. Download the files
Once the job is complete, download the files from the same shell.
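If you script this step yourself, each file URL from the status response needs the same bearer token. The hypothetical sketch below only plans the downloads (local path plus authorized request) without sending anything; keep the 72-hour / 6-download expiration in mind when retrying.

```python
import os
from urllib.parse import urlparse
from urllib.request import Request


def download_plan(file_urls, bearer_token, out_dir="."):
    """Pair each export file URL with a local path and an authorized request.

    Iterate over the plan with urllib.request.urlopen (or your HTTP
    client of choice) to fetch each file into out_dir.
    """
    plan = []
    for url in file_urls:
        name = os.path.basename(urlparse(url).path)  # e.g. Z123456_0001.ndjson
        request = Request(url, headers={"Authorization": f"Bearer {bearer_token}"})
        plan.append((os.path.join(out_dir, name), request))
    return plan
```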
Windows PowerShell client
Download a ZIP file of the PowerShell API repository or learn how to clone it. Then, open a PowerShell terminal and go to the home directory. You can run the dir command to check for the PowerShell scripts (.ps1) and the README. Note that the Windows client always downloads files to your current working directory, so you may want to move that directory's existing contents elsewhere during the job.
I. Prepare the environment variables
- Open a PowerShell terminal and go to the directory that contains the scripts.
- Set the authentication location as the Base64 credential file you created for your bearer token (e.g., /home/abcduser/credentials_Z123456_base64.txt):
- Set the authentication URL as AB2D’s production identity provider:
- Set the API URL as the production environment URL:
II. Start and monitor a job
- This script will use the environment variables to request data and check when the job is complete:
- Check the contents of the variable JOB_RESULTS. It will contain the list of files to download. Leave the shell open for the next step.
III. Download the files
Download the files into your current directory.
Python client
Download a ZIP file of the Python API repository or learn how to clone it. Then, open a terminal and verify the files’ location in your home directory with the dir or ls command. You should see the job-cli.py script and a README. Make sure you have at least Python 3.6 and pip3 installed on your system.
I. Prepare environment variables
- In a Bash shell or PowerShell terminal, go to the directory with the script file.
- Set the authorization file as the Base64 credential file you created for your bearer token (e.g., /home/abcduser/credentials_Z123456_base64.txt). The exact command differs between Linux/Mac and Windows.
- Set the directory variable as the location to save your exported files. The exact command differs between Linux/Mac and Windows. Leave the shell open for the next step.
II. Start a job
- Make sure the python command is mapped to the correct version (3.6 or higher):
- Start the export job, optionally using parameters to filter the data returned. The exact command differs between Linux/Mac and Windows.
- Verify that a job_id.txt file was created. The file should contain a job ID (e.g., 133039b8-c74c-422f-8836-8335c13f5a8d). The exact command differs between Linux/Mac and Windows.
III. Check the job status
- Check the job status in the same shell used to prepare the environment variables. If the script fails for any reason, restart the command; it can be run as many times as necessary. The exact command differs between Linux/Mac and Windows.
- When the job is complete, the script will automatically save a list of files to download. Confirm the location of the files. The exact command differs between Linux/Mac and Windows.
IV. Download the files
- Download the files in the same shell used to prepare the environment variables. The exact command differs between Linux/Mac and Windows.
- List the files that you have downloaded and verify their location. The exact command differs between Linux/Mac and Windows.
Clean up your directory
After you download your files, clean up your directory as needed. You can move the NDJSON files to another directory and remove script-generated files (e.g., jobId.txt). If you run another job, its new files may interfere with or overwrite the old ones.
Incremental export model
Incremental exports of data, ideally at a bi-weekly frequency, reduce data duplication and speed up job times. This allows you to request newly updated data that you haven’t downloaded since your last export. Learn more about the incremental export model.
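One way to implement this is to persist the completion time of each export and pass it as `_since` on the next run. The state-file name and format below are hypothetical, chosen only for illustration.

```python
import json
import os
from datetime import datetime, timezone


def next_since(state_file):
    """Return the _since value for an incremental export, if any.

    Reads the timestamp saved by the previous run; a first run
    returns None and exports everything available.
    """
    if not os.path.exists(state_file):
        return None
    with open(state_file) as f:
        return json.load(f)["last_export"]


def record_export(state_file):
    """Save the current UTC time so the next run can pass it as _since."""
    now = datetime.now(timezone.utc).isoformat()
    with open(state_file, "w") as f:
        json.dump({"last_export": now}, f)
    return now
```

Running this every two weeks, with the saved timestamp fed to the `_since` parameter, keeps each export limited to data updated since the previous download.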
Troubleshooting
Review our Troubleshooting Guide. If you need additional assistance, email the AB2D team at ab2d@cms.hhs.gov.
When contacting our team, please include the following information:
- Your operating system (e.g., Windows, Linux)
- Your system’s IP address
- If applicable, your HTTP response code (e.g., 403, 400)
- A description of the issue including what stage of the process you’re on
- Any logs that may help us resolve the issue. Use caution when sharing log files, as they may contain sensitive information.
Please review all encoded content and/or logs before sharing with the team to ensure they do not contain sensitive details.