I ran into a problem using dask_ml.preprocessing.MinMaxScaler to normalize a dask.dataframe.core.DataFrame. I can use sklearn.preprocessing.MinMaxScaler instead, but I would like to use Dask so that this scales up.
Minimal, reproducible example:
# Get data
import dask.dataframe as dd

ddf = dd.read_csv('test.csv')  # see test.csv below
ddf = ddf.set_index('index')
# Pivot
ddf = ddf.categorize(columns=['item', 'name'])
ddf_p = ddf.pivot_table(index='item', columns='name', values='value', aggfunc='mean')
col = ddf_p.columns.to_list()
# sklearn version
from sklearn.preprocessing import MinMaxScaler
scaler_s = MinMaxScaler()
scaled_ddf_s = scaler_s.fit_transform(ddf_p[col]) # Works!
# dask version
from dask_ml.preprocessing import MinMaxScaler
scaler_d = MinMaxScaler()
scaled_values_d = scaler_d.fit_transform(ddf_p[col]) # Doesn't work
Error message:
TypeError: Categorical is not ordered for operation min
you can use .as_ordered() to change the Categorical to an ordered one
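Presumably the categorical comes from the categorize() call above, which pivot_table requires. A diagnostic sketch (my own addition, untested) to confirm which part of the pivot result is actually categorical:

# Diagnostic sketch (untested): categorize() made 'item' and 'name'
# categorical, so check where that dtype survives after pivoting.
print(ddf_p.dtypes)    # dtypes of the pivoted value columns
print(ddf_p.columns)   # pivot_table can produce a CategoricalIndex of columns
print(ddf_p.index)     # the 'item' row index was categorized as well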
Still, I am not sure which 'Categorical' in the pivot table the error refers to, but I have tried .as_ordered() on the index:
from dask_ml.preprocessing import MinMaxScaler
scaler_d = MinMaxScaler()
ddf_p = ddf_p.index.cat.as_ordered()
scaled_values_d = scaler_d.fit_transform(ddf_p[col])
But then I get the error:
NotImplementedError: Series getitem in only supported for other series objects with matching partition structure
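I suspect the assignment above replaces ddf_p with its index (a Series), so ddf_p[col] then indexes a Series with a list, which would explain this getitem error. A variant that keeps the DataFrame intact might look like the sketch below (untested, my own assumption; if the categorical is in the pivoted column index rather than the row index, this may still not be enough):

# Untested sketch: keep ddf_p as a DataFrame and drop the categorical
# 'item' row index before scaling, so the scaler only sees numeric columns.
from dask_ml.preprocessing import MinMaxScaler

scaler_d = MinMaxScaler()
ddf_num = ddf_p.reset_index(drop=True)
scaled_values_d = scaler_d.fit_transform(ddf_num[col])

Dropping the index is not ideal, though, since I would lose the 'item' labels.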
test.csv:
index,item,name,value
2015-01-01,item_1,A,1
2015-01-01,item_1,B,2
2015-01-01,item_1,C,3
2015-01-01,item_1,D,4
2015-01-01,item_1,E,5
2015-01-02,item_2,A,10
2015-01-02,item_2,B,20
2015-01-02,item_2,C,30
2015-01-02,item_2,D,40
2015-01-02,item_2,E,50