pyBaf2Sql package¶
Submodules¶
pyBaf2Sql.baf module¶
- pyBaf2Sql.baf.close_storage(baf2sql, handle, conn)¶
Close BAF dataset.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
handle (int) – Handle value for BAF dataset initialized using pyBaf2Sql.baf.open_storage().
conn (sqlite3.Connection) – SQL database connection to analysis.sqlite.
- Returns:
Tuple of the handle and connection.
- Return type:
tuple
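The open/close lifecycle implied by close_storage() and open_storage() (documented below) can be sketched as follows. This is a non-authoritative sketch that assumes pyBaf2Sql is installed; `d_folder` is a hypothetical path to a Bruker .d directory.

```python
import sqlite3

def baf_lifecycle(d_folder):
    """Sketch of the open/close lifecycle for a BAF dataset.

    Assumes pyBaf2Sql is installed; d_folder is a hypothetical
    path to a Bruker .d directory containing analysis.baf.
    """
    from pyBaf2Sql.init_baf2sql import init_baf2sql_api
    from pyBaf2Sql.baf import open_storage, close_storage, get_sqlite_cache_filename

    baf2sql = init_baf2sql_api()              # load the Baf2Sql library
    handle = open_storage(baf2sql, d_folder)  # non-zero instance handle
    conn = sqlite3.connect(get_sqlite_cache_filename(baf2sql, d_folder))
    try:
        pass  # ... read arrays via the handle, query metadata via conn ...
    finally:
        # Release both the instance handle and the SQLite connection.
        close_storage(baf2sql, handle, conn)
```

Pairing the two calls in a try/finally ensures the handle is released even if reading fails partway through.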
- pyBaf2Sql.baf.extract_baf_spectrum(baf_data, frame, mode, profile_bins=0, mz_encoding=64, intensity_encoding=64)¶
Extract a spectrum from BAF data as m/z and intensity arrays. The spectrum can be returned in either centroid or profile mode. If “raw” mode is chosen, centroid mode will automatically be used.
- Parameters:
baf_data (pyBaf2Sql.classes.BafData) – BafData object containing metadata from the analysis.sqlite database.
frame (int) – Frame to extract spectrum from.
mode (str) – Mode command line parameter, either “profile”, “centroid”, or “raw”.
profile_bins (int) – Number of bins to bin spectrum to.
mz_encoding (int) – m/z encoding command line parameter, either “64” or “32”.
intensity_encoding (int) – Intensity encoding command line parameter, either “64” or “32”.
- Returns:
Tuple of mz_array (numpy.ndarray) and intensity_array (numpy.ndarray).
- Return type:
tuple[numpy.ndarray]
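A minimal extraction sketch, assuming pyBaf2Sql is installed and `d_folder` is a hypothetical Bruker .d path:

```python
def centroid_spectrum(d_folder, frame):
    """Sketch: pull one centroid spectrum from a BAF dataset.

    Assumes pyBaf2Sql is installed; d_folder is a hypothetical
    path to a Bruker .d directory.
    """
    from pyBaf2Sql.init_baf2sql import init_baf2sql_api
    from pyBaf2Sql.classes import BafData
    from pyBaf2Sql.baf import extract_baf_spectrum

    baf2sql = init_baf2sql_api()
    baf_data = BafData(d_folder, baf2sql)  # parses analysis.sqlite metadata
    # mode may be "profile", "centroid", or "raw" (raw falls back to centroid).
    mz_array, intensity_array = extract_baf_spectrum(baf_data, frame, mode="centroid")
    return mz_array, intensity_array
```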
- pyBaf2Sql.baf.get_num_elements(baf2sql, handle, identity)¶
Get the number of elements stored in an array.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
handle (int) – Handle value for BAF dataset initialized using pyBaf2Sql.baf.open_storage().
identity (str | int) – ID of the desired array.
- Returns:
Number of elements in the array of the specified ID.
- Return type:
int
- pyBaf2Sql.baf.get_sqlite_cache_filename(baf2sql, bruker_d_folder_name)¶
Find the filename of the SQLite cache corresponding to the specified BAF file. The SQLite cache will be created with the filename “analysis.sqlite” if it does not exist yet.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
bruker_d_folder_name (str) – Path to Bruker .d directory containing analysis.baf and analysis.sqlite.
- Returns:
SQLite filename.
- Return type:
str
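Since the returned filename is an ordinary SQLite database path, it can be opened directly with the standard library. A sketch, assuming `baf2sql` was returned by init_baf2sql_api() and `d_folder` is a hypothetical .d path:

```python
import sqlite3

def list_cache_tables(baf2sql, d_folder):
    """Sketch: locate (or create) the SQLite cache and list its tables.

    Assumes baf2sql is a ctypes.CDLL returned by init_baf2sql_api()
    and d_folder is a hypothetical Bruker .d directory path.
    """
    from pyBaf2Sql.baf import get_sqlite_cache_filename

    cache = get_sqlite_cache_filename(baf2sql, d_folder)  # analysis.sqlite path
    conn = sqlite3.connect(cache)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()
```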
- pyBaf2Sql.baf.get_sqlite_cache_filename_v2(baf2sql, bruker_d_folder_name, all_variables=False)¶
Find the filename of the SQLite cache corresponding to the specified BAF file. The SQLite cache will be created with the filename “analysis.sqlite” if it does not exist yet, with the option to include all supported variables.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
bruker_d_folder_name (str) – Path to Bruker .d directory containing analysis.baf and analysis.sqlite.
all_variables (bool) – Whether to load all variables from analysis.sqlite database, defaults to False.
- Returns:
SQLite filename.
- Return type:
str
- pyBaf2Sql.baf.open_storage(baf2sql, bruker_d_folder_name, raw_calibration=False)¶
Open BAF dataset and return a non-zero instance handle to be passed to subsequent API calls.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
bruker_d_folder_name (str) – Path to Bruker .d directory containing analysis.baf and analysis.sqlite.
raw_calibration (bool) – Whether to use raw uncalibrated data (True) or recalibrated data (False), defaults to False.
- Returns:
Non-zero instance handle.
- Return type:
int
- pyBaf2Sql.baf.read_double(baf2sql, handle, identity)¶
Read an array into a user-provided buffer. The data will be converted to the requested type on the fly. The provided buffer must be large enough to hold the entire array.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
handle (int) – Handle value for BAF dataset initialized using pyBaf2Sql.baf.open_storage().
identity (str | int) – ID of the desired array.
- Returns:
Double array from the specified ID.
- Return type:
numpy.ndarray
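A sketch combining get_num_elements() with read_double(). The Python wrapper returns the array directly, so get_num_elements() serves here as a length check rather than for manual buffer allocation. `array_id` is a hypothetical array ID taken from the analysis.sqlite Spectra table; `handle` is assumed to come from open_storage():

```python
def read_mz_array(baf2sql, handle, array_id):
    """Sketch: read a double array and validate its length.

    Assumes baf2sql is a ctypes.CDLL from init_baf2sql_api(), handle
    came from open_storage(), and array_id is a hypothetical array ID
    looked up in analysis.sqlite.
    """
    from pyBaf2Sql.baf import get_num_elements, read_double

    n = get_num_elements(baf2sql, handle, array_id)  # expected element count
    values = read_double(baf2sql, handle, array_id)  # numpy array of doubles
    assert len(values) == n
    return values
```

read_float() and read_uint32() follow the same pattern with different output types.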
- pyBaf2Sql.baf.read_float(baf2sql, handle, identity)¶
Read an array into a user-provided buffer. The data will be converted to the requested type on the fly. The provided buffer must be large enough to hold the entire array.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
handle (int) – Handle value for BAF dataset initialized using pyBaf2Sql.baf.open_storage().
identity (str | int) – ID of the desired array.
- Returns:
Float array from the specified ID.
- Return type:
numpy.ndarray
- pyBaf2Sql.baf.read_uint32(baf2sql, handle, identity)¶
Read an array into a user-provided buffer. The data will be converted to the requested type on the fly. The provided buffer must be large enough to hold the entire array.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
handle (int) – Handle value for BAF dataset initialized using pyBaf2Sql.baf.open_storage().
identity (str | int) – ID of the desired array.
- Returns:
uint32 array from the specified ID.
- Return type:
numpy.ndarray
- pyBaf2Sql.baf.set_num_threads(baf2sql, num_threads)¶
Set the number of threads that this DLL is allowed to use internally. The index <-> m/z transformation is internally parallelized using OpenMP. This call is simply forwarded to omp_set_num_threads(). This function has no real effect on Linux.
- Parameters:
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
num_threads (int) – Number of threads to use (>= 1).
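A short sketch of capping the library's internal thread count right after initialization (assuming pyBaf2Sql is installed; as noted above, this has no real effect on Linux):

```python
def configure_threads(num_threads=4):
    """Sketch: cap the Baf2Sql library's internal OpenMP thread count.

    Assumes pyBaf2Sql is installed. The call is forwarded to
    omp_set_num_threads() inside the DLL and has no real effect
    on Linux.
    """
    from pyBaf2Sql.init_baf2sql import init_baf2sql_api
    from pyBaf2Sql.baf import set_num_threads

    baf2sql = init_baf2sql_api()
    set_num_threads(baf2sql, num_threads)
    return baf2sql
```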
pyBaf2Sql.classes module¶
- class pyBaf2Sql.classes.BafData(bruker_d_folder_name: str, baf2sql, raw_calibration=False, all_variables=True, sql_chunksize=1000)¶
Bases: object
Class containing metadata from BAF files and methods from the Baf2Sql library to work with BAF format data.
- Parameters:
bruker_d_folder_name (str) – Path to Bruker .d directory containing analysis.baf and analysis.sqlite.
baf2sql (ctypes.CDLL) – Library initialized by pyBaf2Sql.init_baf2sql.init_baf2sql_api().
raw_calibration (bool) – Whether to use raw uncalibrated data (True) or recalibrated data (False), defaults to False.
all_variables (bool) – Whether to load all variables from analysis.sqlite database, defaults to True.
sql_chunksize (int) – Number of rows to read from SQL database query at once when reading tables/views from analysis.sqlite.
- close_sql_connection()¶
Close the connection to analysis.sqlite.
- get_db_tables(sql_chunksize=1000)¶
Get a dictionary of all tables found in the analysis.sqlite database, in which the table names act as keys and the corresponding tables are stored as pandas.DataFrame values; this dictionary is stored in pyBaf2Sql.classes.BafData.analysis.
- Parameters:
sql_chunksize (int) – Number of rows to read from SQL database query at once when reading tables/views from analysis.sqlite.
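A sketch of loading the metadata tables through BafData (assuming pyBaf2Sql and its pandas dependency are installed; `d_folder` is a hypothetical .d path):

```python
def load_metadata(d_folder):
    """Sketch: load BAF metadata tables into pandas DataFrames.

    Assumes pyBaf2Sql (with pandas) is installed; d_folder is a
    hypothetical Bruker .d directory path.
    """
    from pyBaf2Sql.init_baf2sql import init_baf2sql_api
    from pyBaf2Sql.classes import BafData

    baf2sql = init_baf2sql_api()
    baf_data = BafData(d_folder, baf2sql)
    # Table name -> pandas.DataFrame, populated by get_db_tables()
    # and stored on the .analysis attribute per the docs above.
    tables = baf_data.analysis
    baf_data.close_sql_connection()
    return tables
```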
- class pyBaf2Sql.classes.BafSpectrum(baf_data, frame: int, mode: str, profile_bins=0, mz_encoding=64, intensity_encoding=64)¶
Bases: object
Class for parsing and storing spectrum metadata and data arrays from BAF format data.
- Parameters:
baf_data (pyBaf2Sql.classes.BafData) – BafData object containing metadata from analysis.sqlite database.
frame (int) – ID of the frame of interest.
mode (str) – Data array mode, either “profile”, “centroid”, or “raw”.
profile_bins (int) – Number of bins to bin spectrum to.
mz_encoding (int) – m/z encoding command line parameter, either “64” or “32”.
intensity_encoding (int) – Intensity encoding command line parameter, either “64” or “32”.
- get_baf_data()¶
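A sketch of constructing a BafSpectrum for a single frame. This assumes `baf_data` is an existing pyBaf2Sql.classes.BafData instance; whether get_baf_data() must be called explicitly or is already invoked by the constructor is not stated in the docs, so the call below is hedged:

```python
def build_spectrum(baf_data, frame):
    """Sketch: construct a BafSpectrum for one frame.

    Assumes baf_data is an existing pyBaf2Sql.classes.BafData
    instance; frame is the ID of the frame of interest.
    """
    from pyBaf2Sql.classes import BafSpectrum

    spectrum = BafSpectrum(baf_data, frame, mode="centroid")
    # Populate the spectrum's data arrays (possibly redundant if the
    # constructor already does this).
    spectrum.get_baf_data()
    return spectrum
```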
pyBaf2Sql.error module¶
- pyBaf2Sql.error.throw_last_baf2sql_error(baf2sql)¶
Error handling for Bruker raw data originating from BAF files. Modified from baf2sql.py example API.
- Parameters:
baf2sql (ctypes.CDLL) – Handle for Baf2sql library.
pyBaf2Sql.init_baf2sql module¶
- pyBaf2Sql.init_baf2sql.init_baf2sql_api(bruker_api_file_name='')¶
Initialize functions from Bruker’s Baf2Sql library using ctypes.
- Parameters:
bruker_api_file_name (str) – Path to the Baf2Sql library, defaults to the packaged library if no custom path is provided.
- Returns:
Handle for Baf2Sql library.
- Return type:
ctypes.CDLL
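Initialization is typically the first call in any workflow. A sketch, assuming pyBaf2Sql is installed; `custom_path` is a hypothetical path to a local Baf2Sql binary and is only needed when the packaged library should not be used:

```python
def load_library(custom_path=""):
    """Sketch: initialize the Baf2Sql API via ctypes.

    With the default empty string, the library packaged with
    pyBaf2Sql is used; custom_path is a hypothetical path to a
    local Baf2Sql binary.
    """
    from pyBaf2Sql.init_baf2sql import init_baf2sql_api

    baf2sql = init_baf2sql_api(custom_path)  # returns a ctypes.CDLL handle
    return baf2sql
```

The returned ctypes.CDLL handle is the `baf2sql` argument expected by every function in pyBaf2Sql.baf.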