dtale package

Submodules

dtale.app module

class dtale.app.DtaleFlask(import_name, reaper_on=True, url=None, app_root=None, *args, **kwargs)[source]

Bases: flask.app.Flask

Overriding Flask’s implementation of get_send_file_max_age, test_client & run

Parameters:
  • import_name – the name of the application package
  • reaper_on (bool) – whether to run auto-reaper subprocess
  • args – Optional arguments to be passed to flask.Flask
  • kwargs – Optional keyword arguments to be passed to flask.Flask
build_reaper(timeout=3600.0)[source]

Builds D-Tale’s auto-reaping subprocess to clean up the process after an hour of inactivity

Parameters:timeout (float) – time in seconds before D-Tale is shutdown for inactivity, defaults to one hour
clear_reaper()[source]

Restarts auto-reaper countdown

get_send_file_max_age(name)[source]

Overriding Flask’s implementation of get_send_file_max_age so we can lower the timeout for javascript and css files which are changed more often

Parameters:name – filename
Returns:Flask’s default behavior for get_send_file_max_age if filename is not in SHORT_LIFE_PATHS, otherwise SHORT_LIFE_TIMEOUT
run(*args, **kwargs)[source]
Parameters:
  • args – Optional arguments to be passed to flask.run
  • kwargs – Optional keyword arguments to be passed to flask.run
test_client(reaper_on=False, port=None, app_root=None, *args, **kwargs)[source]

Overriding Flask’s implementation of test_client so we can specify ports for testing and whether auto-reaper should be running

Parameters:
  • reaper_on (bool) – whether to run auto-reaper subprocess
  • port (str) – port number to associate with the test client
  • app_root (str, optional) – path to prepend to the routes of D-Tale
  • args – Optional arguments to be passed to flask.testing.FlaskClient
  • kwargs – Optional keyword arguments to be passed to flask.testing.FlaskClient
Returns:Flask’s test client
Return type:dtale.app.DtaleFlaskTesting

update_template_context(context)[source]
url_for(endpoint, *args, **kwargs)[source]
class dtale.app.DtaleFlaskTesting(*args, **kwargs)[source]

Bases: flask.testing.FlaskClient

Overriding Flask’s implementation of flask.FlaskClient so we can control the port associated with tests.

This class is required for setting the port on your test so that we won’t have SETTING keys colliding with other tests since the default for every test would be 80.

Parameters:
  • args – Optional arguments to be passed to flask.FlaskClient
  • kwargs – Optional keyword arguments to be passed to flask.FlaskClient
get(*args, **kwargs)[source]
Parameters:
  • args – Optional arguments to be passed to flask.FlaskClient.get
  • kwargs – Optional keyword arguments to be passed to flask.FlaskClient.get
dtale.app.build_app(url, host=None, reaper_on=True, app_root=None)[source]

Builds flask.Flask application encapsulating endpoints for D-Tale’s front-end

Returns:flask.Flask application
Return type:dtale.app.DtaleFlask
dtale.app.build_startup_url_and_app_root(app_root=None)[source]
dtale.app.find_free_port()[source]

Searches for a free port on the executing server to run the flask.Flask process. Checks ports in the range specified using environment variables:

  • DTALE_MIN_PORT (default: 40000)
  • DTALE_MAX_PORT (default: 49000)

The range limitation is required for usage in tools such as jupyterhub. Will raise an exception if an open port cannot be found.

Returns:port number
Return type:int
dtale.app.get_instance(data_id)[source]

Returns a dtale.views.DtaleData object for the data_id passed as input; returns None if the data_id does not exist

Parameters:data_id (str) – integer string identifier for a D-Tale process’s data
Returns:dtale.views.DtaleData
dtale.app.initialize_process_props(host=None, port=None, force=False)[source]

Helper function to initialize global state corresponding to the host & port being used for your flask.Flask process

Parameters:
  • host (str, optional) – hostname to use for the process
  • port (str, optional) – port number to use for the process
  • force (bool) – if true, kill any D-Tale process currently running at the specified host/port

dtale.app.instances()[source]

Prints all urls to the current pieces of data being viewed

dtale.app.is_port_in_use(port)[source]
dtale.app.offline_chart(df, chart_type=None, query=None, x=None, y=None, z=None, group=None, agg=None, window=None, rolling_comp=None, barmode=None, barsort=None, yaxis=None, filepath=None, title=None, **kwargs)[source]

Builds the HTML for a plotly chart figure to be saved to a file or output to a jupyter notebook

Parameters:
  • df (pandas.DataFrame) – data to be charted
  • chart_type (str) – type of chart, possible options are line|bar|pie|scatter|3d_scatter|surface|heatmap
  • query (str, optional) – pandas dataframe query string
  • x (str) – column to use for the X-Axis
  • y (list of str) – columns to use for the Y-Axes
  • z (str, optional) – column to use for the Z-Axis
  • group (list of str or str, optional) – column(s) to use for grouping
  • agg (str, optional) – specific aggregation that can be applied to y or z axes. Possible values are: count, first, last, mean, median, min, max, std, var, mad, prod, sum. This is included in label of axis it is being applied to.
  • window (int, optional) – number of days to include in rolling aggregations
  • rolling_comp (str, optional) – computation to use in rolling aggregations
  • barmode (str, optional) – mode to use for bar chart display. possible values are stack|group(default)|overlay|relative
  • barsort (str, optional) – axis name to sort the bars in a bar chart by (default is ‘x’, but other options are any of the column names used in the ‘y’ parameter)
  • yaxis (dict, optional) – dictionary specifying the min/max for each y-axis in your chart
  • filepath (str, optional) – location to save HTML output
  • title (str, optional) – Title of your chart
  • kwargs (dict) – optional keyword arguments, here in case invalid arguments are passed to this function
Returns:

possible outcomes are:

  • if run within a jupyter notebook and no ‘filepath’ is specified, it will print the resulting HTML within a cell in your notebook
  • if ‘filepath’ is specified, it will save the chart to the path specified
  • otherwise it will return the HTML output as a string

dtale.app.show(data=None, host=None, port=None, name=None, debug=False, subprocess=True, data_loader=None, reaper_on=True, open_browser=False, notebook=False, force=False, context_vars=None, ignore_duplicate=False, app_root=None, allow_cell_edits=True, inplace=False, drop_index=False, hide_shutdown=False, github_fork=False, **kwargs)[source]

Entry point for kicking off D-Tale flask.Flask process from python process

Parameters:
  • data (pandas.DataFrame or pandas.Series or pandas.DatetimeIndex or pandas.MultiIndex, optional) – data which D-Tale will display
  • host (str, optional) – hostname of D-Tale, defaults to 0.0.0.0
  • port (str, optional) – port number of D-Tale process, defaults to any open port on server
  • name (str, optional) – optional label to assign a D-Tale process
  • debug (bool, optional) – will turn on flask.Flask debug functionality, defaults to False
  • subprocess (bool, optional) – run D-Tale as a subprocess of your current process, defaults to True
  • data_loader (func, optional) – function to load your data
  • reaper_on (bool, optional) – turn on subprocess which will terminate D-Tale after 1 hour of inactivity
  • open_browser (bool, optional) – if true, this will try using the webbrowser package to automatically open your default browser to your D-Tale process
  • notebook (bool, optional) – if true, this will try displaying an IPython.display.IFrame
  • force (bool, optional) – if true, this will force the D-Tale instance to run on the specified host/port by killing any other process running at that location
  • context_vars (dict, optional) – a dictionary of the variables that will be available for use in user-defined expressions, such as filters
  • ignore_duplicate (bool, optional) – if true, this will not check if this data matches any other data previously loaded to D-Tale
  • app_root (str, optional) – Optional path to prepend to the routes of D-Tale. This is used when making use of Jupyterhub server proxy
  • allow_cell_edits (bool, optional) – If false, this will not allow users to edit cells directly in their D-Tale grid
  • inplace (bool, optional) – If true, this will call reset_index(inplace=True) on the dataframe used as a way to save memory. Otherwise this will create a brand new dataframe, thus doubling memory but leaving the dataframe input unchanged.
  • drop_index (bool, optional) – If true, this will drop any pre-existing index on the dataframe input.
  • hide_shutdown (bool, optional) – If true, this will hide the “Shutdown” button from users
  • github_fork (bool, optional) – If true, this will display a “Fork me on GitHub” ribbon in the upper right-hand corner of the app
Example:
>>> import dtale
>>> import pandas as pd
>>> df = pd.DataFrame([dict(a=1,b=2,c=3)])
>>> dtale.show(df)
D-Tale started at: http://hostname:port

The link displayed in the logging output can be copied and pasted into any browser

dtale.app.use_colab(port)[source]

dtale.column_builders module

class dtale.column_builders.BinsColumnBuilder(name, cfg)[source]

Bases: object

build_code()[source]
build_column(data)[source]
build_test(data)[source]
class dtale.column_builders.ColumnBuilder(data_id, column_type, name, cfg)[source]

Bases: object

build_code()[source]
build_column()[source]
class dtale.column_builders.DatetimeColumnBuilder(name, cfg)[source]

Bases: object

build_code()[source]
build_column(data)[source]
class dtale.column_builders.NumericColumnBuilder(name, cfg)[source]

Bases: object

build_code()[source]
build_column(data)[source]
class dtale.column_builders.RandomColumnBuilder(name, cfg)[source]

Bases: object

build_code()[source]
build_column(data)[source]
class dtale.column_builders.TransformColumnBuilder(name, cfg)[source]

Bases: object

build_code()[source]
build_column(data)[source]
class dtale.column_builders.TypeConversionColumnBuilder(name, cfg)[source]

Bases: object

build_code()[source]
build_column(data)[source]
build_inner_code()[source]
class dtale.column_builders.WinsorizeColumnBuilder(name, cfg)[source]

Bases: object

build_code()[source]
build_column(data)[source]
class dtale.column_builders.ZScoreNormalizeColumnBuilder(name, cfg)[source]

Bases: object

build_code()[source]
build_column(data)[source]
dtale.column_builders.id_generator(size=10, chars='ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')[source]

dtale.column_filters module

class dtale.column_filters.ColumnFilter(data_id, column, cfg)[source]

Bases: object

save_filter()[source]
class dtale.column_filters.DateFilter(column, classification, cfg)[source]

Bases: dtale.column_filters.MissingFilter

build_filter()[source]
class dtale.column_filters.MissingFilter(column, classification, cfg)[source]

Bases: object

handle_missing(fltr)[source]
class dtale.column_filters.NumericFilter(column, classification, cfg)[source]

Bases: dtale.column_filters.MissingFilter

build_filter()[source]
class dtale.column_filters.OutlierFilter(column, classification, cfg)[source]

Bases: object

build_filter()[source]
class dtale.column_filters.StringFilter(column, classification, cfg)[source]

Bases: dtale.column_filters.MissingFilter

build_filter()[source]

dtale.column_replacements module

class dtale.column_replacements.ColumnReplacement(data_id, col, replacement_type, cfg, name=None)[source]

Bases: object

build_code()[source]
build_replacements()[source]
class dtale.column_replacements.ImputerReplacement(col, cfg, name)[source]

Bases: object

build_code(_data)[source]
build_column(data)[source]
class dtale.column_replacements.SpaceReplacement(col, cfg, name)[source]

Bases: object

build_code(data)[source]
build_column(data)[source]
class dtale.column_replacements.StringReplacement(col, cfg, name)[source]

Bases: object

build_code(data)[source]
build_column(data)[source]
parse_cfg()[source]
class dtale.column_replacements.ValueReplacement(col, cfg, name)[source]

Bases: object

build_code(data)[source]
build_column(data)[source]
dtale.column_replacements.get_inner_replacement_value(val)[source]
dtale.column_replacements.get_inner_replacement_value_as_str(val, series)[source]
dtale.column_replacements.get_replacement_value(cfg, prop)[source]
dtale.column_replacements.get_replacement_value_as_str(cfg, prop, series)[source]

dtale.data_reshapers module

class dtale.data_reshapers.AggregateBuilder(cfg)[source]

Bases: object

build_code()[source]
reshape(data)[source]
class dtale.data_reshapers.DataReshaper(data_id, shape_type, cfg)[source]

Bases: object

build_code()[source]
reshape()[source]
class dtale.data_reshapers.PivotBuilder(cfg)[source]

Bases: object

build_code()[source]
reshape(data)[source]
class dtale.data_reshapers.TransposeBuilder(cfg)[source]

Bases: object

build_code()[source]
reshape(data)[source]
dtale.data_reshapers.flatten_columns(df, columns=None)[source]

dtale.global_state module

dtale.global_state.cleanup(data_id=None)[source]

Helper function for cleaning up state related to a D-Tale process with a specific data_id

Parameters:data_id (str) – integer string identifier for a D-Tale process’s data
dtale.global_state.convert_name_to_url_path(name)[source]
dtale.global_state.drop_punctuation(val)[source]
dtale.global_state.find_data_id(data_id_or_name)[source]
dtale.global_state.get_context_variables(data_id=None)[source]
dtale.global_state.get_data(data_id=None)[source]
dtale.global_state.get_dataset(data_id=None)[source]
dtale.global_state.get_dataset_dim(data_id=None)[source]
dtale.global_state.get_dtypes(data_id=None)[source]
dtale.global_state.get_history(data_id=None)[source]
dtale.global_state.get_metadata(data_id=None)[source]
dtale.global_state.get_settings(data_id=None)[source]
dtale.global_state.load_flag(data_id, flag_name, default)[source]
dtale.global_state.set_context_variables(data_id, val)[source]
dtale.global_state.set_data(data_id, val)[source]
dtale.global_state.set_dataset(data_id, val)[source]
dtale.global_state.set_dataset_dim(data_id, val)[source]
dtale.global_state.set_dtypes(data_id, val)[source]
dtale.global_state.set_history(data_id, val)[source]
dtale.global_state.set_metadata(data_id, val)[source]
dtale.global_state.set_settings(data_id, val)[source]
dtale.global_state.use_default_store()[source]

Use the default global data store, which is dictionaries in memory.

dtale.global_state.use_redis_store(directory, *args, **kwargs)[source]

Configure dtale to use redis for the global data store. Useful for web servers.

Parameters:
  • directory (str) – folder that db files will be stored in
  • args – All other arguments supported by the redislite.Redis() class
  • kwargs – All other keyword arguments supported by the redislite.Redis() class
Returns:

None

dtale.global_state.use_shelve_store(directory)[source]

Configure dtale to use python’s standard ‘shelve’ library for a persistent global data store.

Parameters:directory (str) – directory that the shelve db files will be stored in
Returns:None
dtale.global_state.use_store(store_class, create_store)[source]

Customize how dtale stores and retrieves global data. By default it uses global dictionaries, but this can be problematic if there are memory limitations or multiple python processes are running. Ex: a web server with multiple workers (processes) for processing requests.

Parameters:
  • store_class – Class providing an interface to the data store. To be valid, it must: 1. Implement get, clear, __setitem__, __delitem__, __iter__, __len__, __contains__. 2. Either be a subclass of MutableMapping or implement the ‘to_dict’ method.
  • create_store – Factory function for producing instances of <store_class>. Must take ‘name’ as the only parameter.
Returns:

None

dtale.utils module

exception dtale.utils.ChartBuildingError(error, details=None)[source]

Bases: Exception

Exception for signalling there was an issue constructing the data for your chart.

exception dtale.utils.DuplicateDataError(data_id)[source]

Bases: Exception

Exception for signalling that similar data is trying to be loaded to D-Tale again.

class dtale.utils.JSONFormatter(nan_display='')[source]

Bases: object

Class for formatting dictionaries and lists of dictionaries into JSON compliant data

Example:
>>> nan_display = 'nan'
>>> f = JSONFormatter(nan_display)
>>> f.add_int(1, 'a')
>>> f.add_float(2, 'b')
>>> f.add_string(3, 'c')
>>> jsonify(f.format_dicts([dict(a=1, b=2.0, c='c')]))
add_date(idx, name=None, fmt='%Y-%m-%d %H:%M:%S')[source]
add_float(idx, name=None, precision=6, as_string=False)[source]
add_int(idx, name=None, as_string=False)[source]
add_json(idx, name=None)[source]
add_string(idx, name=None)[source]
add_timestamp(idx, name=None)[source]
format_df(df)[source]
format_dict(lst)[source]
format_dicts(lsts)[source]
format_lists(df)[source]
dtale.utils.build_code_export(data_id, imports='import pandas as pd\n\n', query=None)[source]

Helper function for building a string representing the code that was run to get the data you are viewing to that point.

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • imports (string, optional) – string representing the imports at the top of the code string
  • query (str, optional) – pandas dataframe query string
Returns:

python code string

dtale.utils.build_query(data_id, query=None)[source]
dtale.utils.build_shutdown_url(base)[source]

Builds the shutdown endpoint for the specified base URL

Parameters:base (str) – base URL of a D-Tale process
Returns:URL string of the shutdown endpoint for the server passed
dtale.utils.build_url(port, host)[source]

Returns full URL combining host (if not specified, the output of socket.gethostname() will be used) & port

Parameters:
  • port (str) – integer string for the port to be used by the flask.Flask process
  • host (str, optional) – hostname, can start with ‘http://’, ‘https://’ or just the hostname itself
Returns:

str

dtale.utils.classify_type(type_name)[source]
Parameters:type_name – string label for value from pandas.DataFrame.dtypes
Returns:shortened string label for dtype: S = string, B = boolean, F = float, I = integer, D = timestamp or datetime, TD = timedelta
Return type:str
dtale.utils.dict_merge(d1, d2, *args)[source]

Merges two dictionaries. Items of the second dictionary will replace items of the first dictionary if there are any overlaps. Either dictionary can be None. An empty dictionary {} will be returned if both dictionaries are None.

Parameters:
  • d1 (dict) – first dictionary, can be None
  • d2 (dict) – second dictionary, can be None
  • args – additional dictionaries to merge, can be None
Returns:

new dictionary with the contents of d2 overlaying the contents of d1

Return type:

dict

dtale.utils.divide_chunks(lst, n)[source]

Break list input ‘lst’ up into smaller lists of size ‘n’

dtale.utils.export_to_csv_buffer(data, tsv=False)[source]
dtale.utils.find_dtype(s)[source]

Helper function to determine the dtype of a pandas.Series

dtale.utils.find_dtype_formatter(dtype, overrides=None)[source]
dtale.utils.find_selected_column(data, col)[source]

Handles the case where a series, after reset_index(), has columns like [date, security_id, values]; in that case we want the last column

Parameters:
  • data (pandas.DataFrame) – dataframe whose columns will be searched
  • col (str) – column name to look for
Returns:

column name if it exists within the dataframe’s columns, the last column within the dataframe otherwise

Return type:

str

dtale.utils.fix_url_path(path)[source]
dtale.utils.flatten_lists(lists)[source]
Take an iterable containing iterables and flatten them into one list.
  • [[1], [2], [3, 4]] => [1, 2, 3, 4]
dtale.utils.format_grid(df)[source]

Translate pandas.DataFrame to well-formed JSON. Structure is as follows:

{
  results: [
    {col1: val1_row1, …, colN: valN_row1},
    …,
    {col1: val1_rowN, …, colN: valN_rowN}
  ],
  columns: [
    {name: col1, dtype: int},
    …,
    {name: colN, dtype: float}
  ]
}

Parameters:df (pandas.DataFrame) – dataframe
Returns:JSON
dtale.utils.get_bool_arg(r, name)[source]

Retrieve argument from flask.request and convert to boolean

Parameters:
  • r – flask.request object
  • name – argument name
Type:

str

Returns:

True if lowercase value equals ‘true’, False otherwise

dtale.utils.get_dtypes(df)[source]

Build dictionary of column/dtype name pairs from pandas.DataFrame

dtale.utils.get_float_arg(r, name, default=None)[source]

Retrieve argument from flask.request and convert to float

Parameters:
  • r – flask.request object
  • name – argument name
  • default – default value if parameter is non-existent, defaults to None
Type:

str

Returns:

float argument value

dtale.utils.get_host(host=None)[source]

Returns host input if it exists otherwise the output of socket.gethostname()

Parameters:host (str, optional) – hostname, can start with ‘http://’, ‘https://’ or just the hostname itself
Returns:str
dtale.utils.get_int_arg(r, name, default=None)[source]

Retrieve argument from flask.request and convert to integer

Parameters:
  • r – flask.request object
  • name – argument name
  • default – default value if parameter is non-existent, defaults to None
Type:

str

Returns:

integer argument value

dtale.utils.get_json_arg(r, name, default=None)[source]

Retrieve argument from flask.request and parse JSON to python data structure

Parameters:
  • r – flask.request object
  • name – argument name
  • default – default value if parameter is non-existent, defaults to None
Type:

str

Returns:

parsed JSON

dtale.utils.get_str_arg(r, name, default=None)[source]

Retrieve argument from flask.request and convert to string

Parameters:
  • r – flask.request object
  • name – argument name
  • default – default value if parameter is non-existent, defaults to None
Type:

str

Returns:

string argument value

dtale.utils.grid_columns(df)[source]

Build list of {name, dtype} dictionaries for columns in pandas.DataFrame

dtale.utils.grid_formatter(col_types, nan_display='', overrides=None)[source]

Build dtale.utils.JSONFormatter from pandas.DataFrame

dtale.utils.handle_error(error_info)[source]

Boilerplate exception messaging

dtale.utils.inner_build_query(settings, query=None)[source]
dtale.utils.is_app_root_defined(app_root)[source]
dtale.utils.json_date(x, fmt='%Y-%m-%d %H:%M:%S', nan_display='', **kwargs)[source]

Convert value to date string to be used within JSON output

Parameters:
  • x – value to be converted to date string
  • fmt – the data string formatting to be applied
  • nan_display – if x is numpy.nan then return this value
Returns:

date string value

Return type:

str (YYYY-MM-DD)

dtale.utils.json_float(x, precision=2, nan_display='nan', inf_display='inf', as_string=False)[source]

Convert value to float to be used within JSON output

Parameters:
  • x – value to be converted to float
  • precision – precision of float to be returned
  • nan_display – if x is numpy.nan then return this value
  • inf_display – if x is numpy.inf then return this value
  • as_string – return float as a formatted string (EX: 1,234.5643)
Returns:

float value

Return type:

float

dtale.utils.json_int(x, nan_display='', as_string=False, fmt='{:,d}')[source]

Convert value to integer to be used within JSON output

Parameters:
  • x – value to be converted to integer
  • nan_display – if x is numpy.nan then return this value
  • as_string – return integer as a formatted string (EX: 1,000,000)
Returns:

integer value

Return type:

int

dtale.utils.json_string(x, nan_display='', **kwargs)[source]

convert value to string to be used within JSON output

If a UnicodeEncodeError occurs then str.encode will be called on the input

Parameters:
  • x – value to be converted to string
  • nan_display – if x is numpy.nan then return this value
Returns:

string value

Return type:

str

dtale.utils.json_timestamp(x, nan_display='', **kwargs)[source]

Convert value to timestamp (milliseconds) to be used within JSON output

Parameters:
  • x – value to be converted to milliseconds
  • nan_display – if x is numpy.nan then return this value
Returns:

millisecond value

Return type:

bigint

dtale.utils.jsonify(return_data={}, **kwargs)[source]

Overriding Flask’s jsonify method to account for extra error handling

Parameters:
  • return_data – dictionary of data to be passed to flask.jsonify
  • kwargs – Optional keyword arguments merged into return_data
Returns:

output of flask.jsonify

dtale.utils.jsonify_error(e)[source]
dtale.utils.make_list(vals)[source]

Convert a value that is optionally list or scalar into a list

dtale.utils.retrieve_grid_params(req, props=None)[source]

Pull out grid parameters from flask.request arguments and return as a dict

Parameters:
  • req – flask.request object
  • props (list) – argument names
Returns:

dictionary of argument/value pairs

Return type:

dict

dtale.utils.run_query(df, query, context_vars=None, ignore_empty=False)[source]

Utility function for running pandas.DataFrame.query . This function contains extra logic to handle when column names contain special characters. Looks like pandas will be handling this in a future version: https://github.com/pandas-dev/pandas/issues/27017

The logic to handle these special characters in the meantime is only available in Python 3+

Parameters:
  • df (pandas.DataFrame) – input dataframe
  • query (str) – query string
  • context_vars (dict, optional) – dictionary of user-defined variables which can be referenced by name in query strings
Returns:

filtered dataframe

dtale.utils.running_with_flask_debug()[source]

Checks to see if D-Tale has been initiated with Flask debug mode

Returns:True if executed with Flask debug mode, False otherwise
Return type:bool
dtale.utils.running_with_pytest()[source]

Checks to see if D-Tale has been initiated from test

Returns:True if executed from test, False otherwise
Return type:bool
dtale.utils.sort_df_for_grid(df, params)[source]

Sort dataframe based on the ‘sort’ property in the parameter dictionary. Sort configuration is of the following shape:

{
  sort: [
    [col1, ASC],
    [col2, DESC],
    …
  ]
}

Parameters:
  • df (pandas.DataFrame) – dataframe to sort
  • params (dict) – arguments from flask.request
Returns:

sorted dataframe

Return type:

pandas.DataFrame

dtale.views module

class dtale.views.DtaleData(data_id, url)[source]

Bases: object

Wrapper class to abstract the global state of a D-Tale process while allowing a user to programmatically interact with a running D-Tale instance

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • url (str) – endpoint for instances flask.Flask process
Attributes:
  • _data_id – data identifier
  • _url – flask.Flask endpoint
  • _notebook_handle – reference to the most recent IPython.display.DisplayHandle created
Example:
>>> import dtale
>>> import pandas as pd
>>> df = pd.DataFrame([dict(a=1,b=2,c=3)])
>>> d = dtale.show(df)
>>> tmp = d.data.copy()
>>> tmp['d'] = 4
>>> d.data = tmp
>>> d.kill()
adjust_cell_dimensions(width='100%', height=350)[source]

If you are running ipython>=5.0 then this will update the most recent notebook cell you displayed D-Tale in for this instance with the height/width properties you have passed in as input

Parameters:
  • width – width of the ipython cell
  • height – height of the ipython cell
build_main_url(data_id=None)[source]
data

Property which is a reference to the globally stored data associated with this instance

is_up()[source]

Helper function to pass instance’s endpoint to dtale.views.is_up()

kill()[source]

Helper function to pass instance’s endpoint to dtale.views.kill()

main_url()[source]

Helper function creating the main flask.Flask route using the instance’s url & data_id

Returns:str

notebook(route='/dtale/iframe/', params=None, width='100%', height=475)[source]

Helper function which checks to see if flask.Flask process is up and running and then tries to build an IPython.display.IFrame and run IPython.display.display on it so it will be displayed in the ipython notebook which invoked it.

A reference to the IPython.display.DisplayHandle is stored in _notebook_handle for updating if you are running ipython>=5.0

Parameters:
  • route (str, optional) – the flask.Flask route to hit on D-Tale
  • params (dict, optional) – properties & values passed as query parameters to the route
  • width (str or int, optional) – width of the ipython cell
  • height (str or int, optional) – height of the ipython cell
notebook_charts(chart_type='line', query=None, x=None, y=None, z=None, group=None, agg=None, window=None, rolling_comp=None, barmode=None, barsort=None, width='100%', height=800)[source]

Helper function to build an IPython.display.IFrame pointing at the charts popup

Parameters:
  • chart_type (str) – type of chart, possible options are line|bar|pie|scatter|3d_scatter|surface|heatmap
  • query (str, optional) – pandas dataframe query string
  • x (str) – column to use for the X-Axis
  • y (list of str) – columns to use for the Y-Axes
  • z (str, optional) – column to use for the Z-Axis
  • group (list of str or str, optional) – column(s) to use for grouping
  • agg (str, optional) – specific aggregation that can be applied to y or z axes. Possible values are: count, first, last, mean, median, min, max, std, var, mad, prod, sum. This is included in label of axis it is being applied to.
  • window (int, optional) – number of days to include in rolling aggregations
  • rolling_comp (str, optional) – computation to use in rolling aggregations
  • barmode (str, optional) – mode to use for bar chart display. possible values are stack|group(default)|overlay|relative
  • barsort (str, optional) – axis name to sort the bars in a bar chart by (default is ‘x’, but other options are any of the column names used in the ‘y’ parameter)
  • width (str or int, optional) – width of the ipython cell
  • height (str or int, optional) – height of the ipython cell
Returns:

IPython.display.IFrame

notebook_correlations(col1, col2, width='100%', height=475)[source]

Helper function to build an IPython.display.IFrame pointing at the correlations popup

Parameters:
  • col1 (str) – column on left side of correlation
  • col2 (str) – column on right side of correlation
  • width (str or int, optional) – width of the ipython cell
  • height (str or int, optional) – height of the ipython cell
Returns:

IPython.display.IFrame

offline_chart(chart_type=None, query=None, x=None, y=None, z=None, group=None, agg=None, window=None, rolling_comp=None, barmode=None, barsort=None, yaxis=None, filepath=None, title=None, **kwargs)[source]

Builds the HTML for a plotly chart figure to be saved to a file or output to a jupyter notebook

Parameters:
  • chart_type (str) – type of chart, possible options are line|bar|pie|scatter|3d_scatter|surface|heatmap
  • query (str, optional) – pandas dataframe query string
  • x (str) – column to use for the X-Axis
  • y (list of str) – columns to use for the Y-Axes
  • z (str, optional) – column to use for the Z-Axis
  • group (list of str or str, optional) – column(s) to use for grouping
  • agg (str, optional) – specific aggregation that can be applied to y or z axes. Possible values are: count, first, last, mean, median, min, max, std, var, mad, prod, sum. This is included in label of axis it is being applied to.
  • window (int, optional) – number of days to include in rolling aggregations
  • rolling_comp (str, optional) – computation to use in rolling aggregations
  • barmode (str, optional) – mode to use for bar chart display. possible values are stack|group(default)|overlay|relative
  • barsort (str, optional) – axis name to sort the bars in a bar chart by (default is ‘x’, but other options are any of the column names used in the ‘y’ parameter)
  • yaxis (dict, optional) – dictionary specifying the min/max for each y-axis in your chart
  • filepath (str, optional) – location to save HTML output
  • title (str, optional) – Title of your chart
  • kwargs (dict) – optional keyword arguments, here in case invalid arguments are passed to this function
Returns:

possible outcomes are:

  • if run within a jupyter notebook and no ‘filepath’ is specified, it will print the resulting HTML within a cell in your notebook
  • if ‘filepath’ is specified, it will save the chart to the path specified
  • otherwise it will return the HTML output as a string

open_browser()[source]

This function uses the webbrowser library to try to automatically open the server’s default browser to this D-Tale instance

dtale.views.base_render_template(template, data_id, **kwargs)[source]
Overridden version of Flask.render_template which will also include vital instance information
  • settings
  • version
  • processes
dtale.views.build_chart_filename(chart_type, ext='html')[source]
dtale.views.build_column(data_id)[source]

flask.Flask route to handle the building of new columns in a dataframe. Some of the operations that are available are:

  • numeric: sum/difference/multiply/divide any combination of two columns or static values
  • datetime: retrieving date properties (hour, minute, month, year…) or conversions of dates (month start, month
    end, quarter start…)
  • bins: bucketing numeric data into bins using pandas.cut & pandas.qcut
Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • name – string from flask.request.args[‘name’] of new column to create
  • type – string from flask.request.args[‘type’] of the type of column to build (numeric/datetime/bins)
  • cfg – dict from flask.request.args[‘cfg’] of how to calculate the new column
Returns:

JSON {success: True/False}
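
The ‘bins’ operation can be sketched in plain pandas (the column name and bin count here are illustrative, not what the route itself uses):

```python
import pandas as pd

df = pd.DataFrame({"val": [1, 2, 3, 4, 5, 6, 7, 8]})

# evenly-spaced bins via pandas.cut vs. evenly-populated bins via pandas.qcut
df["val_bins_cut"] = pd.cut(df["val"], bins=4)
df["val_bins_qcut"] = pd.qcut(df["val"], q=4)

print(df["val_bins_cut"].nunique())   # 4
print(df["val_bins_qcut"].nunique())  # 4
```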

dtale.views.build_column_bins_tester(data_id)[source]
dtale.views.build_context_variables(data_id, new_context_vars=None)[source]

Build and return the dictionary of context variables associated with a process. If the names of any new variables are not formatted properly, an exception will be raised. New variables will overwrite the values of existing variables if they share the same name.

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • new_context_vars (dict, optional) – dictionary of name, value pairs for new context variables
Returns:

dict of the context variables for this process

Return type:

dict
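
A minimal sketch of this merge-and-validate behavior, assuming “formatted properly” means valid Python identifiers (the standalone helper here is hypothetical, not the actual implementation):

```python
def build_context_variables(curr_vars, new_vars=None):
    """Merge new context variables over existing ones of the same name."""
    new_vars = new_vars or {}
    for name in new_vars:
        # assumed validation: names must be string identifiers
        if not isinstance(name, str) or not name.isidentifier():
            raise ValueError("invalid context variable name: {}".format(name))
    merged = dict(curr_vars)
    merged.update(new_vars)  # new values overwrite existing ones
    return merged

print(build_context_variables({"a": 1}, {"a": 2, "b": 3}))  # {'a': 2, 'b': 3}
```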

dtale.views.build_dtypes_state(data, prev_state=None)[source]

Helper function to build globally managed state pertaining to a D-Tale instance’s columns & data types

Parameters:data (pandas.DataFrame) – dataframe to build data type information for
Returns:a list of dictionaries containing column names, indexes and data types
dtale.views.build_filter_vals(series, data_id, column, fmt)[source]
dtale.views.build_replacement(data_id)[source]

flask.Flask route to handle the replacement of specific values within a column in a dataframe. Some of the operations that are available are:

  • spaces: replace values consisting of only spaces with a specific value
  • value: replace specific values with a specific value or aggregation
  • strings: replace values which contain a specific character or string (case-insensitive or not) with a
    specific value
  • imputer: replace nan values using sklearn imputers iterative, knn or simple
Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • col – string from flask.request.args[‘col’] of the column to perform replacements upon
  • type – string from flask.request.args[‘type’] of the type of replacement to perform (spaces/fillna/strings/imputer)
  • cfg – dict from flask.request.args[‘cfg’] of how to calculate the replacements
Returns:

JSON {success: True/False}
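
The ‘strings’ operation roughly corresponds to this plain-pandas sketch (the column values and replacement are illustrative):

```python
import pandas as pd

s = pd.Series(["foo", "FOO bar", "baz", None])

# replace values containing "foo" (case-insensitive) with "hit"
mask = s.str.contains("foo", case=False, na=False)
replaced = s.where(~mask, "hit")
print(replaced.tolist())  # ['hit', 'hit', 'baz', None]
```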

dtale.views.calc_outlier_range(s)[source]
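
calc_outlier_range presumably computes bounds along the lines of the standard Tukey IQR fences; a sketch (an assumption, not the actual implementation):

```python
import pandas as pd

def outlier_range(s):
    # Tukey fences: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are outliers
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

s = pd.Series([1, 2, 3, 4, 100])
lower, upper = outlier_range(s)
print(s[(s < lower) | (s > upper)].tolist())  # [100]
```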
dtale.views.chart_csv_export(data_id)[source]
dtale.views.chart_export(data_id)[source]
dtale.views.check_duplicate_data(data)[source]

This function will do a rough check to see if a user has already loaded this piece of data to D-Tale to avoid duplicated state. The checks that take place are:

  • shape (# of rows & # of columns)
  • column names and ordering of columns (eventually might add dtype checking as well…)
Parameters:data (pandas.DataFrame) – dataframe to validate

:raises dtale.utils.DuplicateDataError: if duplicate data exists
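
The shape and column checks can be sketched as (the helper name is hypothetical):

```python
import pandas as pd

def is_duplicate(new_df, existing):
    # rough check: same shape plus same column names in the same order
    return any(
        new_df.shape == df.shape and list(new_df.columns) == list(df.columns)
        for df in existing
    )

loaded = [pd.DataFrame({"a": [1, 2], "b": [3, 4]})]
print(is_duplicate(pd.DataFrame({"a": [9, 9], "b": [8, 8]}), loaded))  # True
print(is_duplicate(pd.DataFrame({"a": [1, 2]}), loaded))               # False
```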

dtale.views.convert_xarray_to_dataset(dataset, **indexers)[source]
dtale.views.data_export(data_id)[source]
dtale.views.delete_col(data_id, column)[source]
dtale.views.describe(data_id, column)[source]

flask.Flask route which returns standard details about column data using pandas.DataFrame.describe() to the front-end as JSON

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • column – name of a column within your dataframe to describe
Returns:

JSON {
    describe: object representing output from pandas.Series.describe(),
    unique_data: array of unique values when data has <= 100 unique values,
    success: True/False
}

dtale.views.dtype_formatter(data, dtypes, data_ranges, prev_dtypes=None)[source]

Helper function to build formatter for the descriptive information about each column in the dataframe you are viewing in D-Tale. This data is later returned to the browser to help with controlling inputs to functions which are heavily tied to specific data types.

Parameters:
  • data (pandas.DataFrame) – dataframe
  • dtypes (dict) – column data type
  • data_ranges (dict, optional) – dictionary containing minimum and maximum value for column (if applicable)
  • prev_dtypes (dict, optional) – previous column information for syncing updates to pre-existing columns
Returns:

formatter function which takes column indexes and names

Return type:

func

dtale.views.dtypes(data_id)[source]

flask.Flask route which returns a list of column names and dtypes to the front-end as JSON

Parameters:data_id (str) – integer string identifier for a D-Tale process’s data
Returns:JSON {
    dtypes: [{index: 1, name: col1, dtype: int64}, …, {index: N, name: colN, dtype: float64}],
    success: True/False
}

dtale.views.edit_cell(data_id, column)[source]
dtale.views.exception_decorator(func)[source]
dtale.views.format_data(data, inplace=False, drop_index=False)[source]
Helper function to build globally managed state pertaining to a D-Tale instance’s data. Some updates being made:
  • convert all column names to strings
  • drop any indexes back into the dataframe so that what we are left with is a natural index [0,1,2,…,n]
  • convert inputs that are indexes into dataframes
  • replace any periods in column names with underscores
Parameters:
  • data (pandas.DataFrame) – dataframe to build data type information for
  • inplace (bool, optional) – If true, this will call reset_index(inplace=True) on the dataframe used as a way to save memory. Otherwise this will create a brand new dataframe, thus doubling memory but leaving the dataframe input unchanged.
  • drop_index (bool, optional) – If true, this will drop any pre-existing index on the dataframe input.
Returns:

formatted pandas.DataFrame and a list of strings constituting what columns were originally in the index
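
A simplified sketch of those cleanup steps (omitting the inplace handling; not the actual implementation):

```python
import pandas as pd

def format_data(data, drop_index=False):
    # remember which columns made up the index before flattening it
    index_cols = [c for c in data.index.names if c is not None]
    data = data.reset_index(drop=drop_index)  # natural index [0, 1, ..., n]
    # stringify column names and swap periods for underscores
    data.columns = [str(c).replace(".", "_") for c in data.columns]
    return data, index_cols

df = pd.DataFrame({"a.b": [1, 2]}, index=pd.Index([10, 20], name="idx"))
cleaned, locked = format_data(df)
print(list(cleaned.columns), locked)  # ['idx', 'a_b'] ['idx']
```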

dtale.views.get_async_column_filter_data(data_id, column)[source]
dtale.views.get_chart_data(data_id)[source]

flask.Flask route which builds data associated with a chart.js chart

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • query – string from flask.request.args[‘query’] which is applied to DATA using the query() function
  • x – string from flask.request.args[‘x’] column to be used as x-axis of chart
  • y – string from flask.request.args[‘y’] column to be used as y-axis of chart
  • group – string from flask.request.args[‘group’] comma-separated string of columns to group chart data by
  • agg – string from flask.request.args[‘agg’] points to a specific function that can be applied to pandas.core.groupby.DataFrameGroupBy. Possible values are: count, first, last, mean, median, min, max, std, var, mad, prod, sum
Returns:

JSON {
    data: {
        series1: { x: [x1, x2, …, xN], y: [y1, y2, …, yN] },
        series2: { x: [x1, x2, …, xN], y: [y1, y2, …, yN] },
        …,
        seriesN: { x: [x1, x2, …, xN], y: [y1, y2, …, yN] }
    },
    min: minY,
    max: maxY
} or {error: ‘Exception message’, traceback: ‘Exception stacktrace’}
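
The grouping/aggregation portion of this route boils down to a pandas groupby; for example, with agg='sum' (data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"x": ["a", "a", "b", "b"], "y": [1, 2, 3, 4]})

# agg="sum" applied to the y-axis column after grouping on x
series = df.groupby("x")["y"].agg("sum")
print(series.to_dict())  # {'a': 3, 'b': 7}
```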

dtale.views.get_code_export(data_id)[source]
dtale.views.get_column_analysis(data_id)[source]

flask.Flask route which returns output from numpy.histogram/pd.value_counts to front-end as JSON

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • col – string from flask.request.args[‘col’] containing name of a column in your dataframe
  • type – string from flask.request.args[‘type’] to signify either a histogram or value counts
  • query – string from flask.request.args[‘query’] which is applied to DATA using the query() function
  • bins – the number of bins to display in your histogram, options on the front-end are 5, 10, 20, 50
  • top – the number of top values to display in your value counts, default is 100
Returns:

JSON {results: DATA, desc: output from pd.DataFrame[col].describe(), success: True/False}

dtale.views.get_column_filter_data(data_id, column)[source]
dtale.views.get_correlations(data_id)[source]

flask.Flask route which gathers Pearson correlations against all combinations of columns with numeric data using pandas.DataFrame.corr()

On large datasets with no numpy.nan data this code will use numpy.corrcoef for speed purposes

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • query – string from flask.request.args[‘query’] which is applied to DATA using the query() function
Returns:

JSON {
    data: [{column: col1, col1: 1.0, col2: 0.99, colN: 0.45}, …, {column: colN, col1: 0.34, col2: 0.88, colN: 1.0}]
} or {error: ‘Exception message’, traceback: ‘Exception stacktrace’}
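
The two code paths described above can be compared directly (illustrative data; values rounded to sidestep floating-point noise):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [1.0, 2.0, 3.0], "col2": [2.0, 4.0, 6.0]})

corr_pd = df.corr()                 # pandas path, tolerant of NaNs
corr_np = np.corrcoef(df.values.T)  # numpy path, faster when no NaNs exist

print(round(corr_pd.loc["col1", "col2"], 6))  # 1.0
print(round(corr_np[0, 1], 6))                # 1.0
```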

dtale.views.get_correlations_ts(data_id)[source]

flask.Flask route which returns timeseries of Pearson correlations of two columns with numeric data using pandas.DataFrame.corr()

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • cols – comma-separated string from flask.request.args[‘cols’] containing names of two columns in dataframe
  • dateCol – string from flask.request.args[‘dateCol’] with name of date-type column in dataframe for timeseries
Returns:

JSON {
    data: {col1:col2: {data: [{corr: 0.99, date: ‘YYYY-MM-DD’}, …], max: 0.99, min: 0.99}}
} or {error: ‘Exception message’, traceback: ‘Exception stacktrace’}

dtale.views.get_data(data_id)[source]

flask.Flask route which returns current rows from DATA (based on scrollbar specs and saved settings) to front-end as JSON

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • ids – required dash separated string “START-END” stating a range of row indexes to be returned to the screen
  • query – string from flask.request.args[‘query’] which is applied to DATA using the query() function
  • sort – JSON string from flask.request.args[‘sort’] which is applied to DATA using the sort_values() or sort_index() function. Here is the JSON structure: [[col1, dir1], [col2, dir2], …, [colN, dirN]]
Returns:

JSON {
    results: [{dtale_index: 1, col1: val1_1, …, colN: valN_1}, …, {dtale_index: N2, col1: val1_N2, …, colN: valN_N2}],
    columns: [{name: col1, dtype: ‘int64’}, …, {name: colN, dtype: ‘datetime’}],
    total: N2,
    success: True/False
}
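
Translating the sort specification into a sort_values call can be sketched as (data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"col1": [3, 1, 2], "col2": ["c", "a", "b"]})

# a decoded sort spec like [["col1", "DESC"]] maps onto sort_values
sort = [("col1", "DESC")]
cols = [c for c, _ in sort]
ascending = [d == "ASC" for _, d in sort]
print(df.sort_values(cols, ascending=ascending)["col1"].tolist())  # [3, 2, 1]
```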

dtale.views.get_dtype_info(data_id, col)[source]
dtale.views.get_filter_info(data_id)[source]

flask.Flask route which returns a view-only version of the query, column filters & context variables to the front end.

Parameters:data_id (str) – integer string identifier for a D-Tale process’s data
Returns:JSON
dtale.views.get_processes()[source]

flask.Flask route which returns list of running D-Tale processes within current python process

Returns:JSON {
    data: [
        {port: 1, name: ‘name1’, rows: 5, columns: 5, names: ‘col1,…,col5’, start: ‘2018-04-30 12:36:44’, ts: 1525106204000},
        …,
        {port: N, name: ‘nameN’, rows: 5, columns: 5, names: ‘col1,…,col5’, start: ‘2018-04-30 12:36:44’, ts: 1525106204000}
    ],
    success: True/False
}

dtale.views.get_scatter(data_id)[source]

flask.Flask route which returns data used in correlation of two columns for scatter chart

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • cols – comma-separated string from flask.request.args[‘cols’] containing names of two columns in dataframe
  • dateCol – string from flask.request.args[‘dateCol’] with name of date-type column in dataframe for timeseries
  • date – string from flask.request.args[‘date’] date value in dateCol to filter dataframe to
Returns:

JSON {
    data: [{col1: 0.123, col2: 0.123, index: 1}, …, {col1: 0.123, col2: 0.123, index: N}],
    stats: {correlated: 50, only_in_s0: 1, only_in_s1: 2, pearson: 0.987, spearman: 0.879},
    x: col1,
    y: col2
} or {error: ‘Exception message’, traceback: ‘Exception stacktrace’}

dtale.views.get_xarray_coords(data_id)[source]
dtale.views.get_xarray_dimension_values(data_id, dim)[source]
dtale.views.handle_koalas(data)[source]

Helper function to check if koalas is installed and whether the incoming data is a koalas dataframe; if so, convert it to a pandas.DataFrame, otherwise simply return the original data structure.

Parameters:data – data we want to check whether it is a koalas dataframe and, if so, convert to pandas.DataFrame
Returns:pandas.DataFrame
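
A duck-typed sketch of this check (the real function inspects the koalas module itself; the stand-in class below is purely illustrative):

```python
def handle_koalas(data):
    # duck-typed sketch: koalas frames expose to_pandas(); plain pandas does not
    if hasattr(data, "to_pandas"):
        return data.to_pandas()
    return data

class FakeKoalasFrame(object):  # hypothetical stand-in for a koalas DataFrame
    def to_pandas(self):
        return "pandas-frame"

print(handle_koalas(FakeKoalasFrame()))  # pandas-frame
print(handle_koalas([1, 2, 3]))          # [1, 2, 3]
```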
dtale.views.head_data_id()[source]
dtale.views.in_ipython_frontend()[source]

Helper function which is a variation of pandas.io.formats.console.in_ipython_frontend which checks to see if we are inside an IPython zmq frontend

Returns:True if D-Tale is being invoked within ipython notebook, False otherwise
dtale.views.is_koalas(data)[source]
dtale.views.is_up(base)[source]

This function checks to see if instance’s flask.Flask process is up by hitting ‘health’ route.

Using verify=False will allow us to validate instances being served up over SSL

Returns:True if flask.Flask process is up and running, False otherwise
dtale.views.kill(base)[source]

This function fires a request to this instance’s ‘shutdown’ route to kill it

dtale.views.load_describe(column_series, additional_aggs=None)[source]

Helper function for grabbing the output from pandas.Series.describe() in a JSON serializable format

Parameters:column_series (pandas.Series) – data to describe
Returns:JSON serializable dictionary of the output from calling pandas.Series.describe()
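
A minimal sketch, assuming “JSON serializable” simply means casting each numpy scalar to a string (the real implementation may format values differently):

```python
import pandas as pd

def load_describe(column_series):
    # cast each numpy scalar to a string so the dict is JSON serializable
    return {k: str(v) for k, v in column_series.describe().items()}

s = pd.Series([1, 2, 3, 4])
out = load_describe(s)
print(out["count"], out["mean"])  # 4.0 2.5
```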
dtale.views.outliers(data_id, column)[source]
dtale.views.refresh_col_indexes(data_id)[source]

Helper function to sync column indexes to current state of dataframe for data_id.

dtale.views.rename_col(data_id, column)[source]
dtale.views.reshape_data(data_id)[source]
dtale.views.run_cleanup(data_id)[source]
dtale.views.save_column_filter(data_id, column)[source]
dtale.views.send_file(output, filename, content_type)[source]
dtale.views.startup(url, data=None, data_loader=None, name=None, data_id=None, context_vars=None, ignore_duplicate=False, allow_cell_edits=True, inplace=False, drop_index=False, hide_shutdown=False, github_fork=False)[source]
Loads and stores data globally
  • If data has indexes then it will save those columns as locked on the front-end
  • If data has a column named “index” it will be dropped so that it won’t collide with row numbering (dtale_index)
  • Create location in memory for storing settings which can be manipulated from the front-end (sorts, filter, …)
Parameters:
  • data (pandas.DataFrame or pandas.Series) – data which D-Tale will display
  • data_loader – function which returns pandas.DataFrame
  • name – string label to apply to your session
  • data_id – integer id assigned to a piece of data viewable in D-Tale, if this is populated then it will override the data at that id
  • context_vars (dict, optional) – a dictionary of the variables that will be available for use in user-defined expressions, such as filters
  • ignore_duplicate – if set to True this will not test whether this data matches any previously loaded to D-Tale
  • allow_cell_edits (bool, optional) – If false, this will not allow users to edit cells directly in their D-Tale grid
  • inplace (bool, optional) – If true, this will call reset_index(inplace=True) on the dataframe used as a way to save memory. Otherwise this will create a brand new dataframe, thus doubling memory but leaving the dataframe input unchanged.
  • drop_index (bool, optional) – If true, this will drop any pre-existing index on the dataframe input.
  • hide_shutdown (bool, optional) – If true, this will hide the “Shutdown” button from users
  • github_fork (bool, optional) – If true, this will display a “Fork Me On GitHub” ribbon in the upper right-hand corner of the app
dtale.views.test_filter(data_id)[source]

flask.Flask route which will test out pandas query before it gets applied to DATA and return exception information to the screen if there is any

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • query – string from flask.request.args[‘query’] which is applied to DATA using the query() function
Returns:

JSON {success: True/False}

dtale.views.to_xarray(data_id)[source]
dtale.views.unique_count(s)[source]
dtale.views.update_column_position(data_id)[source]

flask.Flask route to handle moving of columns within a pandas.DataFrame. Columns can be moved in one of these 4 directions: front, back, left, right

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • action – string from flask.request.args[‘action’] of direction to move column
  • col – string from flask.request.args[‘col’] of column name to move
Returns:

JSON {success: True/False}
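
The four supported moves can be sketched as pure list manipulation (the helper name is hypothetical):

```python
def move_column(columns, col, action):
    # four supported moves: front, back, left, right
    cols = [c for c in columns if c != col]
    idx = columns.index(col)
    if action == "front":
        return [col] + cols
    if action == "back":
        return cols + [col]
    if action == "left":  # swap with the column to the left, if any
        return columns if idx == 0 else cols[:idx - 1] + [col] + cols[idx - 1:]
    if action == "right":  # swap with the column to the right, if any
        return columns if idx == len(columns) - 1 else cols[:idx + 1] + [col] + cols[idx + 1:]
    raise ValueError(action)

print(move_column(["a", "b", "c"], "c", "front"))  # ['c', 'a', 'b']
```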

dtale.views.update_formats(data_id)[source]

flask.Flask route which updates the “formats” property for global SETTINGS associated w/ the current port

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • all – boolean flag which, if true, tells us we should apply this formatting to all columns with the same data type as our selected column
  • col – selected column
  • format – JSON string for the formatting configuration we want applied to either the selected column or all columns with the selected column’s data type
Returns:

JSON

dtale.views.update_locked(data_id)[source]

flask.Flask route to handle saving state associated with locking and unlocking columns

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • action – string from flask.request.args[‘action’] of action to perform (lock or unlock)
  • col – string from flask.request.args[‘col’] of column name to lock/unlock
Returns:

JSON {success: True/False}

dtale.views.update_settings(data_id)[source]

flask.Flask route which updates global SETTINGS for current port

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • settings – JSON string from flask.request.args[‘settings’] which gets decoded and stored in SETTINGS variable
Returns:

JSON

dtale.views.update_visibility(data_id)[source]

flask.Flask route to handle saving state associated with the visibility of columns on the front-end

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • visibility (dict, optional) – string from flask.request.args[‘visibility’] of dictionary of visibility of all columns in a dataframe
  • toggle (str, optional) – string from flask.request.args[‘toggle’] of column name whose visibility should be toggled
Returns:

JSON {success: True/False}

dtale.views.update_xarray_selection(data_id)[source]
dtale.views.upload()[source]
dtale.views.variance(data_id, column)[source]

flask.Flask route which returns standard details about column data using pandas.DataFrame.describe() to the front-end as JSON

Parameters:
  • data_id (str) – integer string identifier for a D-Tale process’s data
  • column – name of a column within your dataframe to check
Returns:

JSON {
    describe: object representing output from pandas.Series.describe(),
    unique_data: array of unique values when data has <= 100 unique values,
    success: True/False
}

dtale.views.view_code_popup()[source]

flask.Flask route which serves up a base jinja template for code snippets

Returns:HTML
dtale.views.view_iframe(data_id=None)[source]

flask.Flask route which serves up base jinja template housing JS files

Parameters:data_id (str) – integer string identifier for a D-Tale process’s data
Returns:HTML
dtale.views.view_main(data_id=None)[source]

flask.Flask route which serves up base jinja template housing JS files

Parameters:data_id (str) – integer string identifier for a D-Tale process’s data
Returns:HTML
dtale.views.view_popup(popup_type, data_id=None)[source]

flask.Flask route which serves up a base jinja template for any popup, additionally forwards any request parameters as input to template.

Parameters:
  • popup_type (str) – type of popup to be opened. Possible values: charts, correlations, describe, histogram, instances
  • data_id (str) – integer string identifier for a D-Tale process’s data
Returns:

HTML

Module contents