Chapter 2: Indexing
import numpy as np
import pandas as pd
df = pd.read_csv('data/table.csv',index_col='ID')
df.head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1102    S_1   C_1      F  street_2     192      73  32.5      B+
1103    S_1   C_1      M  street_2     186      82  87.2      B+
1104    S_1   C_1      F  street_2     167      81  80.4      B-
1105    S_1   C_1      F  street_4     159      64  84.8      B+
I. Single-level indexing
1. The loc method, the iloc method, and the [] operator
These three are probably the most common indexing tools: iloc indexes by position, loc indexes by label, and [] is a convenient shorthand; each has its own strengths.
(a) The loc method (note: every slice used inside loc INCLUDES its right endpoint!)
① Single-row indexing:
df.loc[1103]
School S_1
Class C_1
Gender M
Address street_2
Height 186
Weight 82
Math 87.2
Physics B+
Name: 1103, dtype: object
② Multi-row indexing:
df.loc[[1102,2304]]
     School Class Gender   Address  Height  Weight  Math Physics
ID
1102    S_1   C_1      F  street_2     192      73  32.5      B+
2304    S_2   C_3      F  street_6     164      81  95.5      A-
df.loc[1304:].head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1304    S_1   C_3      M  street_2     195      70  85.2       A
1305    S_1   C_3      F  street_5     187      69  61.7      B-
2101    S_2   C_1      M  street_7     174      84  83.3       C
2102    S_2   C_1      F  street_6     161      61  50.6      B+
2103    S_2   C_1      M  street_4     157      61  52.5      B-
df.loc[2402::-1].head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
2402    S_2   C_4      M  street_7     166      82  48.7       B
2401    S_2   C_4      F  street_2     192      62  45.3       A
2305    S_2   C_3      M  street_4     187      73  48.9       B
2304    S_2   C_3      F  street_6     164      81  95.5      A-
2303    S_2   C_3      F  street_7     190      99  65.9       C
③ Single-column indexing:
df.loc[:,'Height'].head()
ID
1101 173
1102 192
1103 186
1104 167
1105 159
Name: Height, dtype: int64
④ Multi-column indexing:
df.loc[:,['Height','Math']].head()
      Height  Math
ID
1101     173  34.0
1102     192  32.5
1103     186  87.2
1104     167  80.4
1105     159  84.8
df.loc[:,'Height':'Math'].head() #all columns from Height through Math
      Height  Weight  Math
ID
1101     173      63  34.0
1102     192      73  32.5
1103     186      82  87.2
1104     167      81  80.4
1105     159      64  84.8
⑤ Combined row-and-column indexing:
df.loc[1102:2401:3,'Height':'Math'].head()
      Height  Weight  Math
ID
1102     192      73  32.5
1105     159      64  84.8
1203     160      53  58.8
1301     161      68  31.5
1304     195      70  85.2
⑥ Function-based indexing:
df.loc[lambda x:x['Gender']=='M'].head()
#the function used inside loc receives the preceding df itself as its argument
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1103    S_1   C_1      M  street_2     186      82  87.2      B+
1201    S_1   C_2      M  street_5     188      68  97.0      A-
1203    S_1   C_2      M  street_6     160      53  58.8      A+
1301    S_1   C_3      M  street_4     161      68  31.5      B+
def f(x):
    return [1101,1103,1201]
df.loc[f]
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1103    S_1   C_1      M  street_2     186      82  87.2      B+
1201    S_1   C_2      M  street_5     188      68  97.0      A-
⑦ Boolean indexing (covered in depth in Section 2)
df.loc[df['Physics'].isin(['A+','A'])].head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1203    S_1   C_2      M  street_6     160      53  58.8      A+
1304    S_1   C_3      M  street_2     195      70  85.2       A
2105    S_2   C_1      M  street_4     170      81  34.2       A
2203    S_2   C_2      M  street_4     155      91  73.8      A+
df.loc[(df['Height']>170) & (df['Height']<180)].head() #height between 170 and 180
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1202    S_1   C_2      F  street_4     176      94  63.5      B-
1302    S_1   C_3      F  street_1     175      57  87.7      A-
2101    S_2   C_1      M  street_7     174      84  83.3       C
2204    S_2   C_2      M  street_1     175      74  47.2      B-
df.loc[[True if i[-1]=='4' or i[-1]=='7' else False for i in df['Address'].values]].head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1105    S_1   C_1      F  street_4     159      64  84.8      B+
1202    S_1   C_2      F  street_4     176      94  63.5      B-
1301    S_1   C_3      M  street_4     161      68  31.5      B+
1303    S_1   C_3      M  street_7     188      82  49.7       B
2101    S_2   C_1      M  street_7     174      84  83.3       C
Summary: fundamentally, everything passed to loc boils down to either a boolean list or a list of labels drawn from the index; keep that principle in mind and all of the operations above are easy to understand.
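To illustrate that principle, here is a minimal sketch on a toy frame (hypothetical values, not the chapter's table.csv), showing that a label list and an equivalent boolean mask select the same rows through loc:

```python
import pandas as pd

# Toy frame standing in for the chapter's data (hypothetical values)
df = pd.DataFrame({'Math': [34.0, 32.5, 87.2], 'Gender': ['M', 'F', 'M']},
                  index=[1101, 1102, 1103])

# Label-list form: pass the index subset explicitly
by_labels = df.loc[[1101, 1103]]

# Boolean-mask form: pass a boolean list as long as the index
by_mask = df.loc[(df['Gender'] == 'M').tolist()]

assert by_labels.equals(by_mask)  # both select rows 1101 and 1103
```

Every loc expression in this section reduces to one of these two forms.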
(b) The iloc method (note that unlike loc, slices EXCLUDE the right endpoint)
① Single-row indexing:
df.iloc[3]
School S_1
Class C_1
Gender F
Address street_2
Height 167
Weight 81
Math 80.4
Physics B-
Name: 1104, dtype: object
② Multi-row indexing:
df.iloc[3:5]
     School Class Gender   Address  Height  Weight  Math Physics
ID
1104    S_1   C_1      F  street_2     167      81  80.4      B-
1105    S_1   C_1      F  street_4     159      64  84.8      B+
③ Single-column indexing:
df.iloc[:,0].head()
ID
1101 S_1
1102 S_1
1103 S_1
1104 S_1
1105 S_1
Name: School, dtype: object
④ Multi-column indexing:
df.iloc[:,7::-2].head()
     Physics  Weight   Address Class
ID
1101      A+      63  street_1   C_1
1102      B+      73  street_2   C_1
1103      B+      82  street_2   C_1
1104      B-      81  street_2   C_1
1105      B+      64  street_4   C_1
⑤ Mixed indexing:
df.iloc[3::4,7::-2].head()
     Physics  Weight   Address Class
ID
1104      B-      81  street_2   C_1
1203      A+      53  street_6   C_2
1302      A-      57  street_1   C_3
2101       C      84  street_7   C_1
2105       A      81  street_4   C_1
⑥ Function-based indexing:
df.iloc[lambda x:[3]]
     School Class Gender   Address  Height  Weight  Math Physics
ID
1104    S_1   C_1      F  street_2     167      81  80.4      B-
Summary: as the examples show, iloc accepts only positions, i.e. integers and integer lists/slices; label-aligned boolean Series cannot be used as a mask (only their raw values can).
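A short sketch of that restriction on a toy frame (hypothetical data): iloc rejects a boolean Series, which carries labels, but accepts the underlying plain boolean array:

```python
import pandas as pd

df = pd.DataFrame({'Math': [34.0, 32.5, 87.2]}, index=[1101, 1102, 1103])
mask = df['Math'] > 50          # a boolean Series, carrying an index

# iloc refuses an index-carrying boolean Series...
try:
    df.iloc[mask]
    failed = False
except Exception:
    failed = True

# ...but a plain boolean array (positions only, no labels) is accepted
sub = df.iloc[mask.values]
assert failed and list(sub.index) == [1103]
```

The workaround mirrors the `.values` trick shown later for column masks in loc.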
(c) The [] operator
If you do not want to get into trouble, avoid the [] operator when the row index is floating-point: on a Series, a float inside [] is compared against the index values, not positions, which is a special and surprising case.
(c.1) [] on a Series
① Single-element indexing:
s = pd.Series(df['Math'],index=df.index)
s[1101]
#this looks up the index label
34.0
② Multi-row indexing:
s[0:4]
#this is an absolute-position integer slice, independent of the labels — easy to confuse with label slicing
ID
1101 34.0
1102 32.5
1103 87.2
1104 80.4
Name: Math, dtype: float64
③ Function-based indexing:
s[lambda x: x.index[16::-6]]
#note that slicing directly in the lambda (e.g. s[lambda x: 16::-6]) is an error; the slice here acts on labels, not on absolute positions, a common trap
ID
2102 50.6
1301 31.5
1105 84.8
Name: Math, dtype: float64
④ Boolean indexing:
s[s>80]
ID
1103 87.2
1104 80.4
1105 84.8
1201 97.0
1302 87.7
1304 85.2
2101 83.3
2205 85.4
2304 95.5
Name: Math, dtype: float64
(c.2) [] on a DataFrame
① Single-row indexing:
df[1:2]
#it is very tempting to write df['label'] here, which raises an error
#like Series, [] slices rows by absolute position
#to fetch a single row by label, get its position with get_loc:
     School Class Gender   Address  Height  Weight  Math Physics
ID
1102    S_1   C_1      F  street_2     192      73  32.5      B+
row = df.index.get_loc(1102) #row=1
df[row:row+1]
#df.loc[1102]
     School Class Gender   Address  Height  Weight  Math Physics
ID
1102    S_1   C_1      F  street_2     192      73  32.5      B+
② Multi-row indexing:
#slicing works, but to pick out specific rows, prefer loc — otherwise errors are likely
df[3:5]
#df.loc[[1104,1105]]
     School Class Gender   Address  Height  Weight  Math Physics
ID
1104    S_1   C_1      F  street_2     167      81  80.4      B-
1105    S_1   C_1      F  street_4     159      64  84.8      B+
③ Single-column indexing:
df['School'].head()
ID
1101 S_1
1102 S_1
1103 S_1
1104 S_1
1105 S_1
Name: School, dtype: object
④ Multi-column indexing:
df[['School','Math']].head()
     School  Math
ID
1101    S_1  34.0
1102    S_1  32.5
1103    S_1  87.2
1104    S_1  80.4
1105    S_1  84.8
⑤ Function-based indexing:
df[lambda x:['Math','Physics']].head()
      Math Physics
ID
1101  34.0      A+
1102  32.5      B+
1103  87.2      B+
1104  80.4      B-
1105  84.8      B+
⑥ Boolean indexing:
df[df['Gender']=='F'].head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1102    S_1   C_1      F  street_2     192      73  32.5      B+
1104    S_1   C_1      F  street_2     167      81  80.4      B-
1105    S_1   C_1      F  street_4     159      64  84.8      B+
1202    S_1   C_2      F  street_4     176      94  63.5      B-
1204    S_1   C_2      F  street_5     162      63  33.8       B
Summary: in general, [] is best kept for column selection and boolean selection; avoid using it to select rows.
2. Boolean indexing
(a) Boolean operators '&', '|', '~': and, or, not, respectively
df[(df['Gender']=='F')&(df['Address']=='street_2')].head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1102    S_1   C_1      F  street_2     192      73  32.5      B+
1104    S_1   C_1      F  street_2     167      81  80.4      B-
2401    S_2   C_4      F  street_2     192      62  45.3       A
2404    S_2   C_4      F  street_2     160      84  67.7       B
df[(df['Math']>85)|(df['Address']=='street_7')].head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1103    S_1   C_1      M  street_2     186      82  87.2      B+
1201    S_1   C_2      M  street_5     188      68  97.0      A-
1302    S_1   C_3      F  street_1     175      57  87.7      A-
1303    S_1   C_3      M  street_7     188      82  49.7       B
1304    S_1   C_3      M  street_2     195      70  85.2       A
df[~((df['Math']>75)|(df['Address']=='street_1'))].head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1102    S_1   C_1      F  street_2     192      73  32.5      B+
1202    S_1   C_2      F  street_4     176      94  63.5      B-
1203    S_1   C_2      M  street_6     160      53  58.8      A+
1204    S_1   C_2      F  street_5     162      63  33.8       B
1205    S_1   C_2      F  street_6     167      63  68.4      B-
Boolean lists can be used in the corresponding positions of both loc and []:
df.loc[df['Math']>60,(df[:8]['Address']=='street_6').values].head()
#(df[:8]['Address']=='street_6').values is True at the 8th position, so the 8th column is selected
#without .values, index alignment would go wrong; alignment is a central feature of pandas and often very useful,
#but if you are not careful it plants hidden bugs
#df[:8]
#df['Math']>60
     Physics
ID
1103      B+
1104      B-
1105      B+
1201      A-
1202      B-
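The alignment pitfall mentioned in the comments can be shown in miniature (toy data, hypothetical values): a mask built from a column is indexed by row labels, so its raw `.values` must be extracted before it can serve as a column selector.

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]}, index=[10, 20])

# This mask is indexed by ROW labels [10, 20]...
row_mask = df['A'] > 1            # -> [False, True]

# ...so handing the Series to loc's COLUMN slot would ask pandas to align
# labels 10/20 against columns 'A'/'B'; .values strips the index and leaves
# a plain boolean array that is applied positionally.
picked = df.loc[:, row_mask.values]
assert list(picked.columns) == ['B']
```

This is exactly why `.values` appears in the `df[:8]['Address']` example above.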
(b) The isin method
df[df['Address'].isin(['street_1','street_4'])&df['Physics'].isin(['A','A+'])]
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
2105    S_2   C_1      M  street_4     170      81  34.2       A
2203    S_2   C_2      M  street_4     155      91  73.8      A+
#the same selection can be written with a dict:
df[df[['Address','Physics']].isin({'Address':['street_1','street_4'],'Physics':['A','A+']}).all(1)]
#all works like &; the 1 means the check runs across columns, i.e. every value within a row must be True
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
2105    S_2   C_1      M  street_4     170      81  34.2       A
2203    S_2   C_2      M  street_4     155      91  73.8      A+
3. Fast scalar indexing
When only a single element is needed, the at and iat methods are faster:
display(df.at[1101,'School'])
display(df.loc[1101,'School'])
display(df.iat[0,0])
display(df.iloc[0,0])
#uncomment the lines below to compare timings
#%timeit df.at[1101,'School']
#%timeit df.loc[1101,'School']
#%timeit df.iat[0,0]
#%timeit df.iloc[0,0]
'S_1'
'S_1'
'S_1'
'S_1'
4. Interval indexing
Interval indexes are not exclusive to single-level indexing; they are introduced here simply as a special kind of index.
(a) Using interval_range
pd.interval_range(start=0,end=5,closed='both')
#closed can be 'left' (left-closed, right-open), 'right' (left-open, right-closed, the default), 'both' (closed), or 'neither' (open)
IntervalIndex([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5]],closed='both',dtype='interval[int64]')
pd.interval_range(start=0,periods=8,freq=5)
#periods sets the number of intervals, freq sets the step
IntervalIndex([(0, 5], (5, 10], (10, 15], (15, 20], (20, 25], (25, 30], (30, 35], (35, 40]],closed='right',dtype='interval[int64]')
(b) Using cut to turn a numeric column into a categorical whose elements are intervals, e.g. to bucket the Math scores:
math_interval = pd.cut(df['Math'],bins=[0,40,60,80,100])
#note: without an explicit type conversion the result is category dtype, not interval dtype
math_interval.head()
ID
1101 (0, 40]
1102 (0, 40]
1103 (80, 100]
1104 (80, 100]
1105 (80, 100]
Name: Math, dtype: category
Categories (4, interval[int64]): [(0, 40] < (40, 60] < (60, 80] < (80, 100]]
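cut can also attach human-readable labels to the bins instead of raw intervals; a minimal sketch on a few made-up scores:

```python
import pandas as pd

scores = pd.Series([34.0, 58.8, 72.2, 95.5])   # hypothetical Math scores

# labels= replaces the interval categories with names of your choosing
graded = pd.cut(scores, bins=[0, 60, 80, 100], labels=['low', 'mid', 'high'])
assert list(graded) == ['low', 'low', 'mid', 'high']
```

This is often more convenient than intervals when the buckets feed into a report or a groupby.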
(c) Selecting with an interval index
df_i = df.join(math_interval,rsuffix='_interval')[['Math','Math_interval']].reset_index().set_index('Math_interval')
df_i.head() #rsuffix is the string appended to the clashing column name
                 ID  Math
Math_interval
(0, 40]        1101  34.0
(0, 40]        1102  32.5
(80, 100]      1103  87.2
(80, 100]      1104  80.4
(80, 100]      1105  84.8
df_i.loc[65].head()
#every interval containing the value is selected
                 ID  Math
Math_interval
(60, 80]       1202  63.5
(60, 80]       1205  68.4
(60, 80]       1305  61.7
(60, 80]       2104  72.2
(60, 80]       2202  68.5
df_i.loc[[65,90]].head()
                 ID  Math
Math_interval
(60, 80]       1202  63.5
(60, 80]       1205  68.4
(60, 80]       1305  61.7
(60, 80]       2104  72.2
(60, 80]       2202  68.5
To select by an interval, first convert the categorical variable into a true interval variable, then use the overlaps method:
#df_i.loc[pd.Interval(70,75)].head() raises an error
df_i[df_i.index.astype('interval').overlaps(pd.Interval(70, 85))].head()
                 ID  Math
Math_interval
(80, 100]      1103  87.2
(80, 100]      1104  80.4
(80, 100]      1105  84.8
(80, 100]      1201  97.0
(60, 80]       1202  63.5
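The semantics of pd.Interval that drive these selections can be sketched in isolation (a minimal, self-contained example):

```python
import pandas as pd

iv = pd.Interval(60, 80)            # right-closed by default: (60, 80]

assert 70 in iv                     # interior point
assert 80 in iv                     # right endpoint is included
assert 60 not in iv                 # left endpoint is excluded

# Two intervals overlap when they share at least one point
assert iv.overlaps(pd.Interval(75, 90))
assert not iv.overlaps(pd.Interval(80, 90))   # (60,80] and (80,90] share none
```

The closed side therefore decides both scalar lookups like df_i.loc[65] and interval queries via overlaps.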
II. Multi-level indexing
1. Creating a MultiIndex
(a) With from_tuples or from_arrays
① Directly from tuples
tuples = [('A','a'),('A','b'),('B','a'),('B','b')] #a list of tuples
mul_index = pd.MultiIndex.from_tuples(tuples, names=('Upper', 'Lower'))
mul_index
MultiIndex([('A', 'a'),('A', 'b'),('B', 'a'),('B', 'b')],names=['Upper', 'Lower'])
pd.DataFrame({'Score':['perfect','good','fair','bad']},index=mul_index)
               Score
Upper Lower
A     a      perfect
      b         good
B     a         fair
      b          bad
L = [['A','a'],['A','b'],['B','a'],['B','b']] #avoid the name `list`, which shadows the builtin
mul_index = pd.MultiIndex.from_tuples(L, names=('Upper', 'Lower')) #converted to tuples internally
pd.DataFrame({'Score':['perfect','good','fair','bad']},index=mul_index)
               Score
Upper Lower
A     a      perfect
      b         good
B     a         fair
      b          bad
mul_index
#which shows the lists were converted to tuples internally
MultiIndex([('A', 'a'),('A', 'b'),('B', 'a'),('B', 'b')],names=['Upper', 'Lower'])
② Building the tuples with zip
L1 = list('AABB')
L2 = list('abab')
tuples = list(zip(L1,L2))
mul_index = pd.MultiIndex.from_tuples(tuples, names=('Upper', 'Lower'))
pd.DataFrame({'Score':['perfect','good','fair','bad']},index=mul_index)
               Score
Upper Lower
A     a      perfect
      b         good
B     a         fair
      b          bad
③ From arrays
array = np.array([['A','A','B','B'],['a','b','a','b']]) #list-like: [[level-1 labels],[level-2 labels]]; every level must match the index length
mul_index = pd.MultiIndex.from_arrays(array, names=('Upper', 'Lower'))
pd.DataFrame({'Score':['perfect','good','fair','bad']},index=mul_index)
               Score
Upper Lower
A     a      perfect
      b         good
B     a         fair
      b          bad
(b) With from_product
L1 = ['A','B'] #the Cartesian product yields an index of length len(L1)*len(L2)*...
L2 = ['a','b']
mul_index =pd.MultiIndex.from_product([L1,L2],names=('Upper', 'Lower'))
#pairwise product
pd.DataFrame({'Score':['perfect','good','fair','bad']},index=mul_index)
               Score
Upper Lower
A     a      perfect
      b         good
B     a         fair
      b          bad
(c) From columns of the DataFrame (the set_index method)
df_using_mul = df.set_index(['Class','Address'])
df_using_mul.head()
                School Gender  Height  Weight  Math Physics
Class Address
C_1   street_1     S_1      M     173      63  34.0      A+
      street_2     S_1      F     192      73  32.5      B+
      street_2     S_1      M     186      82  87.2      B+
      street_2     S_1      F     167      81  80.4      B-
      street_4     S_1      F     159      64  84.8      B+
2. Slicing with a multi-level index
df_using_mul.head()
                School Gender  Height  Weight  Math Physics
Class Address
C_1   street_1     S_1      M     173      63  34.0      A+
      street_2     S_1      F     192      73  32.5      B+
      street_2     S_1      M     186      82  87.2      B+
      street_2     S_1      F     167      81  80.4      B-
      street_4     S_1      F     159      64  84.8      B+
(a) Ordinary slicing
#df_using_mul.loc['C_2','street_5']
#when the index is unsorted, even a single-label lookup emits a performance warning
#df_using_mul.index.is_lexsorted() → False
#this checks whether the index is lexicographically sorted
df_using_mul.sort_index().loc['C_2','street_5']
#df_using_mul.sort_index().index.is_lexsorted() → True
#df_using_mul.sort_index()
                School Gender  Height  Weight  Math Physics
Class Address
C_2   street_5     S_1      M     188      68  97.0      A-
      street_5     S_1      F     162      63  33.8       B
      street_5     S_2      M     193     100  39.1       B
#df_using_mul.loc[('C_2','street_5'):] raises an error
#multi-level slicing is not allowed on an unsorted index
df_using_mul.sort_index().loc[('C_2','street_6'):('C_4','street_3')]
#note that because loc is used, the right endpoint is still included
                School Gender  Height  Weight  Math Physics
Class Address
C_2   street_6     S_1      M     160      53  58.8      A+
      street_6     S_1      F     167      63  68.4      B-
      street_7     S_2      F     194      77  68.5      B+
      street_7     S_2      F     183      76  85.4       B
C_3   street_1     S_1      F     175      57  87.7      A-
      street_2     S_1      M     195      70  85.2       A
      street_4     S_1      M     161      68  31.5      B+
      street_4     S_2      F     157      78  72.3      B+
      street_4     S_2      M     187      73  48.9       B
      street_5     S_1      F     187      69  61.7      B-
      street_5     S_2      M     171      88  32.7       A
      street_6     S_2      F     164      81  95.5      A-
      street_7     S_1      M     188      82  49.7       B
      street_7     S_2      F     190      99  65.9       C
C_4   street_2     S_2      F     192      62  45.3       A
      street_2     S_2      F     160      84  67.7       B
df_using_mul.sort_index().loc[('C_2','street_7'):'C_3'].head()
#a non-tuple label is also legal and selects every element in that level
                School Gender  Height  Weight  Math Physics
Class Address
C_2   street_7     S_2      F     194      77  68.5      B+
      street_7     S_2      F     183      76  85.4       B
C_3   street_1     S_1      F     175      57  87.7      A-
      street_2     S_1      M     195      70  85.2       A
      street_4     S_1      M     161      68  31.5      B+
(b) First special case: a list of tuples
df_using_mul.sort_index().loc[[('C_2','street_7'),('C_3','street_2')]] #both endpoints included
#selects the listed elements, specified down to the innermost level
                School Gender  Height  Weight  Math Physics
Class Address
C_2   street_7     S_2      F     194      77  68.5      B+
      street_7     S_2      F     183      76  85.4       B
C_3   street_2     S_1      M     195      70  85.2       A
(c) Second special case: a tuple of lists
df_using_mul.sort_index().loc[(['C_2','C_3'],['street_4','street_7']),:]
#selects rows whose first level is 'C_2' or 'C_3' AND whose second level is 'street_4' or 'street_7'
                School Gender  Height  Weight  Math Physics
Class Address
C_2   street_4     S_1      F     176      94  63.5      B-
      street_4     S_2      M     155      91  73.8      A+
      street_7     S_2      F     194      77  68.5      B+
      street_7     S_2      F     183      76  85.4       B
C_3   street_4     S_1      M     161      68  31.5      B+
      street_4     S_2      F     157      78  72.3      B+
      street_4     S_2      M     187      73  48.9       B
      street_7     S_1      M     188      82  49.7       B
      street_7     S_2      F     190      99  65.9       C
3. slice objects in a multi-level index
L1,L2 = ['A','B','C'],['a','b','c']
mul_index1 = pd.MultiIndex.from_product([L1,L2],names=('Upper', 'Lower'))
L3,L4 = ['D','E','F'],['d','e','f']
mul_index2 = pd.MultiIndex.from_product([L3,L4],names=('Big', 'Small'))
df_s = pd.DataFrame(np.random.rand(9,9),index=mul_index1,columns=mul_index2)
df_s
Big                 D                             E                             F
Small               d         e         f         d         e         f         d         e         f
Upper Lower
A     a      0.055073  0.046398  0.433773  0.585803  0.758589  0.021143  0.388852  0.086923  0.249213
      b      0.581040  0.619700  0.269257  0.498630  0.172987  0.373643  0.401451  0.608396  0.517261
      c      0.734722  0.664146  0.715707  0.422658  0.702079  0.489320  0.987386  0.034874  0.952730
B     a      0.907978  0.703347  0.475559  0.005389  0.784927  0.072212  0.749511  0.398780  0.524044
      b      0.690069  0.544365  0.132101  0.149513  0.153937  0.142433  0.873528  0.619124  0.815529
      c      0.197430  0.976303  0.137348  0.981766  0.028390  0.479319  0.621560  0.818642  0.379542
C     a      0.491799  0.649872  0.669458  0.010002  0.980888  0.864160  0.143542  0.652107  0.224476
      b      0.322752  0.668354  0.448504  0.812689  0.401167  0.022905  0.644584  0.475140  0.546644
      c      0.735888  0.001076  0.644940  0.526345  0.733607  0.265210  0.667444  0.619716  0.774425
idx=pd.IndexSlice
IndexSlice can be used very flexibly:
df_s.loc[idx['B':,df_s['D']['d']>0.3],idx[df_s.sum()>4]]
#df_s.sum() sums over the column axis by default, so it returns 9 values
Big                 D                   E         F
Small               d         e         e         d         e         f
Upper Lower
B     a      0.907978  0.703347  0.784927  0.749511  0.398780  0.524044
      b      0.690069  0.544365  0.153937  0.873528  0.619124  0.815529
C     a      0.491799  0.649872  0.980888  0.143542  0.652107  0.224476
      b      0.322752  0.668354  0.401167  0.644584  0.475140  0.546644
      c      0.735888  0.001076  0.733607  0.667444  0.619716  0.774425
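Before mixing boolean conditions in, it helps to see IndexSlice on its own. A minimal self-contained sketch (toy 4x4 frame, not the df_s above) of slicing both axes of a MultiIndex at once:

```python
import numpy as np
import pandas as pd

idx = pd.IndexSlice
mi = pd.MultiIndex.from_product([['A', 'B'], ['a', 'b']], names=('Upper', 'Lower'))
df = pd.DataFrame(np.arange(16).reshape(4, 4), index=mi, columns=mi)

# idx[...] builds the nested tuple-of-slices that plain loc syntax cannot
# express: rows with Upper from 'B' on, columns with Lower == 'b' at any Upper
sub = df.loc[idx['B':, :], idx[:, 'b']]
assert sub.shape == (2, 2)
```

The same pattern generalizes to the boolean-valued slicers used in the df_s example above.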
4. Swapping index levels
(a) The swaplevel method (swaps two levels)
df_using_mul.head()
                School Gender  Height  Weight  Math Physics
Class Address
C_1   street_1     S_1      M     173      63  34.0      A+
      street_2     S_1      F     192      73  32.5      B+
      street_2     S_1      M     186      82  87.2      B+
      street_2     S_1      F     167      81  80.4      B-
      street_4     S_1      F     159      64  84.8      B+
df_using_mul.swaplevel(i=1,j=0,axis=0).sort_index().head() #i and j are the levels to swap; axis=0 means the row index
                 School Gender  Height  Weight  Math Physics
Address  Class
street_1 C_1        S_1      M     173      63  34.0      A+
         C_2        S_2      M     175      74  47.2      B-
         C_3        S_1      F     175      57  87.7      A-
street_2 C_1        S_1      F     192      73  32.5      B+
         C_1        S_1      M     186      82  87.2      B+
(b) The reorder_levels method (rearranges any number of levels)
df_muls = df.set_index(['School','Class','Address'])
df_muls.head()
                        Gender  Height  Weight  Math Physics
School Class Address
S_1    C_1   street_1        M     173      63  34.0      A+
             street_2        F     192      73  32.5      B+
             street_2        M     186      82  87.2      B+
             street_2        F     167      81  80.4      B-
             street_4        F     159      64  84.8      B+
df_muls.reorder_levels([2,0,1],axis=0).sort_index().head() #[2,0,1] is the new ordering of the original index levels
                        Gender  Height  Weight  Math Physics
Address  School Class
street_1 S_1    C_1          M     173      63  34.0      A+
                C_3          F     175      57  87.7      A-
         S_2    C_2          M     175      74  47.2      B-
street_2 S_1    C_1          F     192      73  32.5      B+
                C_1          M     186      82  87.2      B+
#if the index levels have names, the names can be used directly
df_muls.reorder_levels(['Address','School','Class'],axis=0).sort_index().head()
                        Gender  Height  Weight  Math Physics
Address  School Class
street_1 S_1    C_1          M     173      63  34.0      A+
                C_3          F     175      57  87.7      A-
         S_2    C_2          M     175      74  47.2      B-
street_2 S_1    C_1          F     192      73  32.5      B+
                C_1          M     186      82  87.2      B+
III. Setting the index
1. The index_col parameter
index_col is a parameter of read_csv, not a method:
pd.read_csv('data/table.csv',index_col=['Address','School']) #the index is set while the file is read
                Class    ID Gender  Height  Weight  Math Physics
Address  School
street_1 S_1      C_1  1101      M     173      63  34.0      A+
street_2 S_1      C_1  1102      F     192      73  32.5      B+
         S_1      C_1  1103      M     186      82  87.2      B+
         S_1      C_1  1104      F     167      81  80.4      B-
street_4 S_1      C_1  1105      F     159      64  84.8      B+
street_5 S_1      C_2  1201      M     188      68  97.0      A-
street_4 S_1      C_2  1202      F     176      94  63.5      B-
street_6 S_1      C_2  1203      M     160      53  58.8      A+
street_5 S_1      C_2  1204      F     162      63  33.8       B
street_6 S_1      C_2  1205      F     167      63  68.4      B-
street_4 S_1      C_3  1301      M     161      68  31.5      B+
street_1 S_1      C_3  1302      F     175      57  87.7      A-
street_7 S_1      C_3  1303      M     188      82  49.7       B
street_2 S_1      C_3  1304      M     195      70  85.2       A
street_5 S_1      C_3  1305      F     187      69  61.7      B-
street_7 S_2      C_1  2101      M     174      84  83.3       C
street_6 S_2      C_1  2102      F     161      61  50.6      B+
street_4 S_2      C_1  2103      M     157      61  52.5      B-
street_5 S_2      C_1  2104      F     159      97  72.2      B+
street_4 S_2      C_1  2105      M     170      81  34.2       A
street_5 S_2      C_2  2201      M     193     100  39.1       B
street_7 S_2      C_2  2202      F     194      77  68.5      B+
street_4 S_2      C_2  2203      M     155      91  73.8      A+
street_1 S_2      C_2  2204      M     175      74  47.2      B-
street_7 S_2      C_2  2205      F     183      76  85.4       B
street_4 S_2      C_3  2301      F     157      78  72.3      B+
street_5 S_2      C_3  2302      M     171      88  32.7       A
street_7 S_2      C_3  2303      F     190      99  65.9       C
street_6 S_2      C_3  2304      F     164      81  95.5      A-
street_4 S_2      C_3  2305      M     187      73  48.9       B
street_2 S_2      C_4  2401      F     192      62  45.3       A
street_7 S_2      C_4  2402      M     166      82  48.7       B
street_6 S_2      C_4  2403      F     158      60  59.7      B+
street_2 S_2      C_4  2404      F     160      84  67.7       B
street_6 S_2      C_4  2405      F     193      54  47.6       B
2. reindex and reindex_like
reindex means re-indexing; its key property is index alignment, and it is often used to reorder rows or columns.
df.head(7)
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1102    S_1   C_1      F  street_2     192      73  32.5      B+
1103    S_1   C_1      M  street_2     186      82  87.2      B+
1104    S_1   C_1      F  street_2     167      81  80.4      B-
1105    S_1   C_1      F  street_4     159      64  84.8      B+
1201    S_1   C_2      M  street_5     188      68  97.0      A-
1202    S_1   C_2      F  street_4     176      94  63.5      B-
df.reindex(index=[1101,1203,1206,2402,6778])
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1   173.0    63.0  34.0      A+
1203    S_1   C_2      M  street_6   160.0    53.0  58.8      A+
1206    NaN   NaN    NaN       NaN     NaN     NaN   NaN     NaN
2402    S_2   C_4      M  street_7   166.0    82.0  48.7       B
6778    NaN   NaN    NaN       NaN     NaN     NaN   NaN     NaN
df.reindex(columns=['Height','Gender','Average']).head()
      Height Gender  Average
ID
1101     173      M      NaN
1102     192      F      NaN
1103     186      M      NaN
1104     167      F      NaN
1105     159      F      NaN
Missing values can be filled via fill_value or the method parameter ('bfill': fill from the next valid row / 'ffill': from the previous valid row / 'nearest': from the closest); method requires a monotonic index.
df.reindex(index=[1101,1203,1206,2402],method='bfill')
#bfill fills label 1206 from the next valid row, ffill from the previous one, nearest from the closest
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1203    S_1   C_2      M  street_6     160      53  58.8      A+
1206    S_1   C_3      M  street_4     161      68  31.5      B+
2402    S_2   C_4      M  street_7     166      82  48.7       B
df.reindex(index=[1101,1203,1206,2402],method='nearest')
#numerically, 1205 is closer to 1206 than 1301 is, so 1205's row is used
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1203    S_1   C_2      M  street_6     160      53  58.8      A+
1206    S_1   C_2      F  street_6     167      63  68.4      B-
2402    S_2   C_4      M  street_7     166      82  48.7       B
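Besides method=, the fill_value option mentioned above substitutes a constant for the missing labels instead of copying a neighbouring row; a minimal sketch on a toy Series (hypothetical values):

```python
import pandas as pd

s = pd.Series([34.0, 58.8], index=[1101, 1203])

# Labels absent from the original get the constant instead of NaN
r = s.reindex([1101, 1203, 1206], fill_value=0.0)
assert r[1206] == 0.0 and r[1101] == 34.0
```

fill_value does not require a monotonic index, unlike method=.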
reindex_like builds a DataFrame whose row and column indexes exactly match those of the argument, taking the data from the calling object
df_temp = pd.DataFrame({'Weight':np.zeros(5),'Height':np.zeros(5),'ID':[1101,1104,1103,1106,1102]}).set_index('ID') #1105 will map to NaN
df_temp.reindex_like(df[0:5][['Weight','Height']]) #df[0:5]: the first five index labels of df
      Weight  Height
ID
1101     0.0     0.0
1102     0.0     0.0
1103     0.0     0.0
1104     0.0     0.0
1105     NaN     NaN
If df_temp is monotonic, the method parameter can also be used:
df_temp = pd.DataFrame({'Weight':range(5),'Height':range(5),'ID':[1101,1104,1103,1106,1102]}).set_index('ID').sort_index()
df_temp.reindex_like(df[0:5][['Weight','Height']],method='bfill') #fills 1105 from the next valid row, i.e. 1106, whose value is 3
#df_temp.reindex_like(df[0:5][['Weight','Height']],method='ffill') #would fill from the previous valid row, i.e. 1104, value 1
#check for yourself that 1105's value below follows the bfill rule
      Weight  Height
ID
1101       0       0
1102       4       4
1103       2       2
1104       1       1
1105       3       3
3. set_index and reset_index
First, set_index: as the name suggests, it turns chosen columns into the index
Using an existing column as the index:
df.head()
     School Class Gender   Address  Height  Weight  Math Physics
ID
1101    S_1   C_1      M  street_1     173      63  34.0      A+
1102    S_1   C_1      F  street_2     192      73  32.5      B+
1103    S_1   C_1      M  street_2     186      82  87.2      B+
1104    S_1   C_1      F  street_2     167      81  80.4      B-
1105    S_1   C_1      F  street_4     159      64  84.8      B+
df.set_index('Class').head()
      School Gender   Address  Height  Weight  Math Physics
Class
C_1      S_1      M  street_1     173      63  34.0      A+
C_1      S_1      F  street_2     192      73  32.5      B+
C_1      S_1      M  street_2     186      82  87.2      B+
C_1      S_1      F  street_2     167      81  80.4      B-
C_1      S_1      F  street_4     159      64  84.8      B+
The append parameter keeps the current index in place:
df.set_index('Class',append=True).head()
           School Gender   Address  Height  Weight  Math Physics
ID   Class
1101 C_1      S_1      M  street_1     173      63  34.0      A+
1102 C_1      S_1      F  street_2     192      73  32.5      B+
1103 C_1      S_1      M  street_2     186      82  87.2      B+
1104 C_1      S_1      F  street_2     167      81  80.4      B-
1105 C_1      S_1      F  street_4     159      64  84.8      B+
Using an external, table-length column as the index (it must first be wrapped in a Series, or an error is raised):
df.set_index(pd.Series(range(df.shape[0]))).head()
  School Class Gender   Address  Height  Weight  Math Physics
0    S_1   C_1      M  street_1     173      63  34.0      A+
1    S_1   C_1      F  street_2     192      73  32.5      B+
2    S_1   C_1      M  street_2     186      82  87.2      B+
3    S_1   C_1      F  street_2     167      81  80.4      B-
4    S_1   C_1      F  street_4     159      64  84.8      B+
Multiple index levels can be added directly:
df.set_index([pd.Series(range(df.shape[0])),pd.Series(np.ones(df.shape[0]))]).head()
      School Class Gender   Address  Height  Weight  Math Physics
0 1.0    S_1   C_1      M  street_1     173      63  34.0      A+
1 1.0    S_1   C_1      F  street_2     192      73  32.5      B+
2 1.0    S_1   C_1      M  street_2     186      82  87.2      B+
3 1.0    S_1   C_1      F  street_2     167      81  80.4      B-
4 1.0    S_1   C_1      F  street_4     159      64  84.8      B+
Now reset_index, whose main job is to reset the index
By default it restores the plain integer index:
df.reset_index().head() #the index becomes 0,1,2,...
     ID School Class Gender   Address  Height  Weight  Math Physics
0  1101    S_1   C_1      M  street_1     173      63  34.0      A+
1  1102    S_1   C_1      F  street_2     192      73  32.5      B+
2  1103    S_1   C_1      M  street_2     186      82  87.2      B+
3  1104    S_1   C_1      F  street_2     167      81  80.4      B-
4  1105    S_1   C_1      F  street_4     159      64  84.8      B+
The level parameter picks which index level to reset, and col_level picks which column level it is inserted into:
L1,L2 = ['A','B','C'],['a','b','c']
mul_index1 = pd.MultiIndex.from_product([L1,L2],names=('Upper', 'Lower'))
L3,L4 = ['D','E','F'],['d','e','f']
mul_index2 = pd.MultiIndex.from_product([L3,L4],names=('Big', 'Small'))
df_temp = pd.DataFrame(np.random.rand(9,9),index=mul_index1,columns=mul_index2)
df_temp
Big                 D                             E                             F
Small               d         e         f         d         e         f         d         e         f
Upper Lower
A     a      0.793149  0.741857  0.564179  0.709766  0.475654  0.078749  0.005152  0.530341  0.772194
      b      0.555399  0.350251  0.602172  0.238758  0.498534  0.420549  0.540857  0.528240  0.247767
      c      0.241440  0.840120  0.418478  0.953688  0.708561  0.152443  0.649509  0.861890  0.372687
B     a      0.010432  0.650559  0.813984  0.212479  0.789201  0.744064  0.539185  0.710612  0.361783
      b      0.012562  0.032409  0.451925  0.155730  0.722682  0.155294  0.192574  0.669353  0.615208
      c      0.835537  0.353932  0.136030  0.640238  0.780667  0.281929  0.819563  0.847354  0.077893
C     a      0.817135  0.310771  0.165960  0.165289  0.839561  0.552440  0.104440  0.457922  0.376567
      b      0.471089  0.816320  0.794785  0.183299  0.583441  0.751852  0.084048  0.306189  0.863428
      c      0.094737  0.401595  0.706380  0.345283  0.453558  0.394212  0.885934  0.575093  0.203312
df_temp1 = df_temp.reset_index(level=1,col_level=1)
df_temp1
Big     Lower         D                             E                             F
Small                 d         e         f         d         e         f         d         e         f
Upper
A           a  0.793149  0.741857  0.564179  0.709766  0.475654  0.078749  0.005152  0.530341  0.772194
A           b  0.555399  0.350251  0.602172  0.238758  0.498534  0.420549  0.540857  0.528240  0.247767
A           c  0.241440  0.840120  0.418478  0.953688  0.708561  0.152443  0.649509  0.861890  0.372687
B           a  0.010432  0.650559  0.813984  0.212479  0.789201  0.744064  0.539185  0.710612  0.361783
B           b  0.012562  0.032409  0.451925  0.155730  0.722682  0.155294  0.192574  0.669353  0.615208
B           c  0.835537  0.353932  0.136030  0.640238  0.780667  0.281929  0.819563  0.847354  0.077893
C           a  0.817135  0.310771  0.165960  0.165289  0.839561  0.552440  0.104440  0.457922  0.376567
C           b  0.471089  0.816320  0.794785  0.183299  0.583441  0.751852  0.084048  0.306189  0.863428
C           c  0.094737  0.401595  0.706380  0.345283  0.453558  0.394212  0.885934  0.575093  0.203312
df_temp1.columns
#the Lower level was indeed inserted into the column MultiIndex
MultiIndex([( '', 'Lower'),('D', 'd'),('D', 'e'),('D', 'f'),('E', 'd'),('E', 'e'),('E', 'f'),('F', 'd'),('F', 'e'),('F', 'f')],names=['Big', 'Small'])
df_temp1.index
#the innermost level was moved out of the row index
Index(['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'], dtype='object', name='Upper')
4. rename_axis and rename
rename_axis is aimed at multi-level indexes: it renames an index level (the index name), not the index labels
df_temp.rename_axis(index={'Lower':'LowerLower'},columns={'Big':'BigBig'})
| Upper | LowerLower | D/d | D/e | D/f | E/d | E/e | E/f | F/d | F/e | F/f |
|---|---|---|---|---|---|---|---|---|---|---|
| A | a | 0.322856 | 0.303286 | 0.510177 | 0.677119 | 0.539872 | 0.008080 | 0.155318 | 0.687972 | 0.211114 |
| A | b | 0.788099 | 0.099715 | 0.033253 | 0.784997 | 0.822390 | 0.681439 | 0.226472 | 0.964799 | 0.622567 |
| A | c | 0.206164 | 0.417146 | 0.169923 | 0.764059 | 0.387532 | 0.741304 | 0.156683 | 0.105008 | 0.636024 |
| B | a | 0.154204 | 0.489378 | 0.026083 | 0.023313 | 0.392803 | 0.537590 | 0.423063 | 0.892903 | 0.083580 |
| B | b | 0.516691 | 0.648889 | 0.210534 | 0.648650 | 0.492758 | 0.013937 | 0.618279 | 0.517379 | 0.346631 |
| B | c | 0.471466 | 0.389771 | 0.358777 | 0.755062 | 0.813432 | 0.440888 | 0.351122 | 0.004274 | 0.268696 |
| C | a | 0.095295 | 0.117381 | 0.472925 | 0.710563 | 0.521524 | 0.486703 | 0.530199 | 0.453099 | 0.465785 |
| C | b | 0.478185 | 0.465777 | 0.916301 | 0.135971 | 0.868624 | 0.789809 | 0.959583 | 0.689099 | 0.379456 |
| C | c | 0.664374 | 0.197314 | 0.382233 | 0.798935 | 0.642967 | 0.933398 | 0.827343 | 0.667308 | 0.309584 |
The rename method modifies row or column index labels, not the index names:
df_temp.rename(index={'A':'T','a':'d'},columns={'e':'changed_e'}).head()
| Upper | Lower | D/d | D/changed_e | D/f | E/d | E/changed_e | E/f | F/d | F/changed_e | F/f |
|---|---|---|---|---|---|---|---|---|---|---|
| T | d | 0.793149 | 0.741857 | 0.564179 | 0.709766 | 0.475654 | 0.078749 | 0.005152 | 0.530341 | 0.772194 |
| T | b | 0.555399 | 0.350251 | 0.602172 | 0.238758 | 0.498534 | 0.420549 | 0.540857 | 0.528240 | 0.247767 |
| T | c | 0.241440 | 0.840120 | 0.418478 | 0.953688 | 0.708561 | 0.152443 | 0.649509 | 0.861890 | 0.372687 |
| B | d | 0.010432 | 0.650559 | 0.813984 | 0.212479 | 0.789201 | 0.744064 | 0.539185 | 0.710612 | 0.361783 |
| B | b | 0.012562 | 0.032409 | 0.451925 | 0.155730 | 0.722682 | 0.155294 | 0.192574 | 0.669353 | 0.615208 |
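The contrast between the two methods is easiest to see on a small toy frame (the `idx`/`toy` names below are hypothetical, not part of the dataset above):

```python
import pandas as pd

# toy frame with a two-level row index named ('Upper', 'Lower')
idx = pd.MultiIndex.from_product([['A', 'B'], ['a', 'b']], names=['Upper', 'Lower'])
toy = pd.DataFrame({'x': [1, 2, 3, 4]}, index=idx)

# rename_axis changes the *name* of an index level...
renamed_axis = toy.rename_axis(index={'Lower': 'LowerLower'})
print(list(renamed_axis.index.names))  # ['Upper', 'LowerLower']

# ...while rename changes the *labels* inside the levels
renamed = toy.rename(index={'a': 'd'})
print(renamed.index.get_level_values(1).unique().tolist())  # ['d', 'b']
```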
四、Common indexing functions
1. The where method
where fills the cells where the condition is False:
df.head()
| | # | Name | Type 1 | Type 2 | Total | HP | Attack | Defense | Sp. Atk | Sp. Def | Speed | Generation | Legendary |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | Bulbasaur | Grass | Poison | 318 | 45 | 49 | 49 | 65 | 65 | 45 | 1 | False |
| 1 | 2 | Ivysaur | Grass | Poison | 405 | 60 | 62 | 63 | 80 | 80 | 60 | 1 | False |
| 2 | 3 | Venusaur | Grass | Poison | 525 | 80 | 82 | 83 | 100 | 100 | 80 | 1 | False |
| 3 | 3 | VenusaurMega Venusaur | Grass | Poison | 625 | 80 | 100 | 123 | 122 | 120 | 80 | 1 | False |
| 4 | 4 | Charmander | Fire | NaN | 309 | 39 | 52 | 43 | 60 | 50 | 65 | 1 | False |
df.where(df['Gender']=='M').head()
#rows where the condition is False are set to NaN (missing values) in every column

| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1101 | S_1 | C_1 | M | street_1 | 173.0 | 63.0 | 34.0 | A+ |
| 1102 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 1103 | S_1 | C_1 | M | street_2 | 186.0 | 82.0 | 87.2 | B+ |
| 1104 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 1105 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
Filtering this way and then dropping the missing rows gives exactly the same result as the [] operator:
df.where(df['Gender']=='M').dropna().head() #dropna() removes the missing rows

| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1101 | S_1 | C_1 | M | street_1 | 173.0 | 63.0 | 34.0 | A+ |
| 1103 | S_1 | C_1 | M | street_2 | 186.0 | 82.0 | 87.2 | B+ |
| 1201 | S_1 | C_2 | M | street_5 | 188.0 | 68.0 | 97.0 | A- |
| 1203 | S_1 | C_2 | M | street_6 | 160.0 | 53.0 | 58.8 | A+ |
| 1301 | S_1 | C_3 | M | street_4 | 161.0 | 68.0 | 31.5 | B+ |
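A quick sanity check of this equivalence on a minimal hypothetical frame:

```python
import pandas as pd

# hypothetical miniature of the school table
toy = pd.DataFrame({'Gender': ['M', 'F', 'M'], 'Height': [173, 192, 186]})

a = toy.where(toy['Gender'] == 'M').dropna()  # NaN-fill the F rows, then drop them
b = toy[toy['Gender'] == 'M']                 # plain boolean selection

# the same rows survive; where + dropna upcasts numbers to float, so compare values
assert (a.values == b.values).all()
print(a.index.tolist())  # [0, 2]
```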
The first argument is the boolean condition, the second is the fill value:
df.where(df['Gender']=='M',np.random.rand(df.shape[0],df.shape[1])).head() #random samples from the uniform distribution on [0,1)

| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1101 | S_1 | C_1 | M | street_1 | 173.000000 | 63.000000 | 34.000000 | A+ |
| 1102 | 0.637422 | 0.646786 | 0.361462 | 0.355069 | 0.023905 | 0.773924 | 0.973148 | 0.807385 |
| 1103 | S_1 | C_1 | M | street_2 | 186.000000 | 82.000000 | 87.200000 | B+ |
| 1104 | 0.686135 | 0.385697 | 0.967066 | 0.949422 | 0.868410 | 0.266690 | 0.847499 | 0.77188 |
| 1105 | 0.353921 | 0.743227 | 0.761644 | 0.119467 | 0.403684 | 0.798981 | 0.294869 | 0.891606 |
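The fill value need not be a random matrix; a scalar works too. A minimal sketch on a toy Series:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])
# keep values where the condition holds, replace everything else with 0
filled = s.where(s > 2, 0)
print(filled.tolist())  # [0, 0, 3, 4]
```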
2. The mask method
mask is the functional opposite of where and otherwise identical: it fills the cells where the condition is True
df.mask(df['Gender']=='M').dropna().head()
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1102 | S_1 | C_1 | F | street_2 | 192.0 | 73.0 | 32.5 | B+ |
| 1104 | S_1 | C_1 | F | street_2 | 167.0 | 81.0 | 80.4 | B- |
| 1105 | S_1 | C_1 | F | street_4 | 159.0 | 64.0 | 84.8 | B+ |
| 1202 | S_1 | C_2 | F | street_4 | 176.0 | 94.0 | 63.5 | B- |
| 1204 | S_1 | C_2 | F | street_5 | 162.0 | 63.0 | 33.8 | B |
df.mask(df['Gender']=='M',np.random.rand(df.shape[0],df.shape[1])).head()
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1101 | 0.997311 | 0.978119 | 0.388461 | 0.0658261 | 0.819698 | 0.599252 | 0.425240 | 0.577825 |
| 1102 | S_1 | C_1 | F | street_2 | 192.000000 | 73.000000 | 32.500000 | B+ |
| 1103 | 0.840625 | 0.68047 | 0.830757 | 0.0382815 | 0.898461 | 0.005448 | 0.844379 | 0.64525 |
| 1104 | S_1 | C_1 | F | street_2 | 167.000000 | 81.000000 | 80.400000 | B- |
| 1105 | S_1 | C_1 | F | street_4 | 159.000000 | 64.000000 | 84.800000 | B+ |
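Equivalently, mask(cond) is where(~cond); a minimal check on a toy Series:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])
cond = s > 2
masked = s.mask(cond, 0)                 # fill where cond is True
assert masked.equals(s.where(~cond, 0))  # identical to where on the negated condition
print(masked.tolist())  # [1, 2, 0, 0]
```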
3. The query method
df.head()
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1101 | S_1 | C_1 | M | street_1 | 173 | 63 | 34.0 | A+ |
| 1102 | S_1 | C_1 | F | street_2 | 192 | 73 | 32.5 | B+ |
| 1103 | S_1 | C_1 | M | street_2 | 186 | 82 | 87.2 | B+ |
| 1104 | S_1 | C_1 | F | street_2 | 167 | 81 | 80.4 | B- |
| 1105 | S_1 | C_1 | F | street_4 | 159 | 64 | 84.8 | B+ |
Inside a query boolean expression, all of the following are legal: row and column index names, strings, and/not/or/&/|/~/not in/in/==/!=, and the arithmetic operators
df.query('(Address in ["street_6","street_7"])&(Weight>(70+10))&(ID in [1303,2304,2402])')
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1303 | S_1 | C_3 | M | street_7 | 188 | 82 | 49.7 | B |
| 2304 | S_2 | C_3 | F | street_6 | 164 | 81 | 95.5 | A- |
| 2402 | S_2 | C_4 | M | street_7 | 166 | 82 | 48.7 | B |
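Any such query can always be rewritten as a boolean mask with isin; a sketch on a hypothetical toy frame:

```python
import pandas as pd

# hypothetical miniature of the school table
toy = pd.DataFrame({'Weight': [60, 75, 85],
                    'Address': ['street_6', 'street_7', 'street_1']})

q = toy.query('(Address in ["street_6", "street_7"]) & (Weight > 70)')
m = toy[toy['Address'].isin(['street_6', 'street_7']) & (toy['Weight'] > 70)]
assert q.equals(m)       # the two spellings select the same rows
print(q.index.tolist())  # [1]
```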
五、Handling duplicates
1. The duplicated method
This method returns a boolean Series marking whether each row duplicates an earlier one
df.duplicated('Gender').head() #duplicated relative to the preceding rows
ID
1101 False
1102 False
1103 True
1104 True
1105 True
dtype: bool
The optional keep parameter defaults to 'first', marking the first occurrence as non-duplicate; with 'last' the last occurrence is marked non-duplicate; with False every repeated value is marked True
df.duplicated('Gender',keep='last').tail() #duplicated relative to the following rows
ID
2401 True
2402 False
2403 True
2404 True
2405 False
dtype: bool
df.duplicated('Gender',keep=False).head() #both F and M are repeated values, so every row is marked
ID
1101 True
1102 True
1103 True
1104 True
1105 True
dtype: bool
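The three keep settings, checked on a minimal toy Series:

```python
import pandas as pd

s = pd.Series(['M', 'F', 'M'])
print(s.duplicated().tolist())             # [False, False, True]  keep='first'
print(s.duplicated(keep='last').tolist())  # [True, False, False]
print(s.duplicated(keep=False).tolist())   # [True, False, True]   every repeat marked
```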
2. The drop_duplicates method
As the name suggests, this drops duplicate rows. It is useful in the grouping operations of later chapters, for example to keep the first row of each group:
df.drop_duplicates('Class')
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1101 | S_1 | C_1 | M | street_1 | 173 | 63 | 34.0 | A+ |
| 1201 | S_1 | C_2 | M | street_5 | 188 | 68 | 97.0 | A- |
| 1301 | S_1 | C_3 | M | street_4 | 161 | 68 | 31.5 | B+ |
| 2401 | S_2 | C_4 | F | street_2 | 192 | 62 | 45.3 | A |
Its parameters mirror those of duplicated:
df.drop_duplicates('Class',keep='last') #compare from the back, keeping the last occurrence of each class
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 2105 | S_2 | C_1 | M | street_4 | 170 | 81 | 34.2 | A |
| 2205 | S_2 | C_2 | F | street_7 | 183 | 76 | 85.4 | B |
| 2305 | S_2 | C_3 | M | street_4 | 187 | 73 | 48.9 | B |
| 2405 | S_2 | C_4 | F | street_6 | 193 | 54 | 47.6 | B |
Passing several columns is equivalent to treating them jointly as a multi-level key when comparing duplicates:
df.drop_duplicates(['School','Class']) #drop repeated (School, Class) combinations, keeping the first
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1101 | S_1 | C_1 | M | street_1 | 173 | 63 | 34.0 | A+ |
| 1201 | S_1 | C_2 | M | street_5 | 188 | 68 | 97.0 | A- |
| 1301 | S_1 | C_3 | M | street_4 | 161 | 68 | 31.5 | B+ |
| 2101 | S_2 | C_1 | M | street_7 | 174 | 84 | 83.3 | C |
| 2201 | S_2 | C_2 | M | street_5 | 193 | 100 | 39.1 | B |
| 2301 | S_2 | C_3 | F | street_4 | 157 | 78 | 72.3 | B+ |
| 2401 | S_2 | C_4 | F | street_2 | 192 | 62 | 45.3 | A |
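The joint comparison can be verified on a tiny hypothetical frame:

```python
import pandas as pd

toy = pd.DataFrame({'School': ['S_1', 'S_1', 'S_2'],
                    'Class':  ['C_1', 'C_1', 'C_1'],
                    'ID':     [1101, 1102, 2101]})
# rows are compared on the (School, Class) pair jointly, not column by column
kept = toy.drop_duplicates(['School', 'Class'])
print(kept['ID'].tolist())  # [1101, 2101]
```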
六、The sampling function
The sampling function here refers to the sample method
(a) n sets the sample size
df.sample(n=5)
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1301 | S_1 | C_3 | M | street_4 | 161 | 68 | 31.5 | B+ |
| 1105 | S_1 | C_1 | F | street_4 | 159 | 64 | 84.8 | B+ |
| 1101 | S_1 | C_1 | M | street_1 | 173 | 63 | 34.0 | A+ |
| 2304 | S_2 | C_3 | F | street_6 | 164 | 81 | 95.5 | A- |
| 2402 | S_2 | C_4 | M | street_7 | 166 | 82 | 48.7 | B |
(b) frac sets the sampling fraction
df.sample(frac=0.2)
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1204 | S_1 | C_2 | F | street_5 | 162 | 63 | 33.8 | B |
| 2302 | S_2 | C_3 | M | street_5 | 171 | 88 | 32.7 | A |
| 1205 | S_1 | C_2 | F | street_6 | 167 | 63 | 68.4 | B- |
| 1203 | S_1 | C_2 | M | street_6 | 160 | 53 | 58.8 | A+ |
| 2305 | S_2 | C_3 | M | street_4 | 187 | 73 | 48.9 | B |
| 1305 | S_1 | C_3 | F | street_5 | 187 | 69 | 61.7 | B- |
| 2102 | S_2 | C_1 | F | street_6 | 161 | 61 | 50.6 | B+ |
(c) replace controls sampling with replacement (with replacement, the population stays fixed, so an already-drawn row can be drawn again, as in probability theory)
display(df.sample(n=df.shape[0],replace=True).head())
display(df)
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1302 | S_1 | C_3 | F | street_1 | 175 | 57 | 87.7 | A- |
| 1202 | S_1 | C_2 | F | street_4 | 176 | 94 | 63.5 | B- |
| 1101 | S_1 | C_1 | M | street_1 | 173 | 63 | 34.0 | A+ |
| 2202 | S_2 | C_2 | F | street_7 | 194 | 77 | 68.5 | B+ |
| 2302 | S_2 | C_3 | M | street_5 | 171 | 88 | 32.7 | A |

| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1101 | S_1 | C_1 | M | street_1 | 173 | 63 | 34.0 | A+ |
| 1102 | S_1 | C_1 | F | street_2 | 192 | 73 | 32.5 | B+ |
| 1103 | S_1 | C_1 | M | street_2 | 186 | 82 | 87.2 | B+ |
| 1104 | S_1 | C_1 | F | street_2 | 167 | 81 | 80.4 | B- |
| 1105 | S_1 | C_1 | F | street_4 | 159 | 64 | 84.8 | B+ |
| 1201 | S_1 | C_2 | M | street_5 | 188 | 68 | 97.0 | A- |
| 1202 | S_1 | C_2 | F | street_4 | 176 | 94 | 63.5 | B- |
| 1203 | S_1 | C_2 | M | street_6 | 160 | 53 | 58.8 | A+ |
| 1204 | S_1 | C_2 | F | street_5 | 162 | 63 | 33.8 | B |
| 1205 | S_1 | C_2 | F | street_6 | 167 | 63 | 68.4 | B- |
| 1301 | S_1 | C_3 | M | street_4 | 161 | 68 | 31.5 | B+ |
| 1302 | S_1 | C_3 | F | street_1 | 175 | 57 | 87.7 | A- |
| 1303 | S_1 | C_3 | M | street_7 | 188 | 82 | 49.7 | B |
| 1304 | S_1 | C_3 | M | street_2 | 195 | 70 | 85.2 | A |
| 1305 | S_1 | C_3 | F | street_5 | 187 | 69 | 61.7 | B- |
| 2101 | S_2 | C_1 | M | street_7 | 174 | 84 | 83.3 | C |
| 2102 | S_2 | C_1 | F | street_6 | 161 | 61 | 50.6 | B+ |
| 2103 | S_2 | C_1 | M | street_4 | 157 | 61 | 52.5 | B- |
| 2104 | S_2 | C_1 | F | street_5 | 159 | 97 | 72.2 | B+ |
| 2105 | S_2 | C_1 | M | street_4 | 170 | 81 | 34.2 | A |
| 2201 | S_2 | C_2 | M | street_5 | 193 | 100 | 39.1 | B |
| 2202 | S_2 | C_2 | F | street_7 | 194 | 77 | 68.5 | B+ |
| 2203 | S_2 | C_2 | M | street_4 | 155 | 91 | 73.8 | A+ |
| 2204 | S_2 | C_2 | M | street_1 | 175 | 74 | 47.2 | B- |
| 2205 | S_2 | C_2 | F | street_7 | 183 | 76 | 85.4 | B |
| 2301 | S_2 | C_3 | F | street_4 | 157 | 78 | 72.3 | B+ |
| 2302 | S_2 | C_3 | M | street_5 | 171 | 88 | 32.7 | A |
| 2303 | S_2 | C_3 | F | street_7 | 190 | 99 | 65.9 | C |
| 2304 | S_2 | C_3 | F | street_6 | 164 | 81 | 95.5 | A- |
| 2305 | S_2 | C_3 | M | street_4 | 187 | 73 | 48.9 | B |
| 2401 | S_2 | C_4 | F | street_2 | 192 | 62 | 45.3 | A |
| 2402 | S_2 | C_4 | M | street_7 | 166 | 82 | 48.7 | B |
| 2403 | S_2 | C_4 | F | street_6 | 158 | 60 | 59.7 | B+ |
| 2404 | S_2 | C_4 | F | street_2 | 160 | 84 | 67.7 | B |
| 2405 | S_2 | C_4 | F | street_6 | 193 | 54 | 47.6 | B |
df.sample(n=35,replace=True).index.is_unique
False
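sample also accepts a random_state argument (not used above) that makes a draw reproducible; a small sketch on a toy frame:

```python
import pandas as pd

toy = pd.DataFrame({'x': range(10)})
a = toy.sample(n=3, random_state=0)
b = toy.sample(n=3, random_state=0)
assert a.index.equals(b.index)  # same seed, same rows drawn
```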
(d) axis sets the axis to sample along; the default 0 samples rows
df.sample(n=4,axis=1).head() #sample columns instead
| ID | Gender | Class | Math | Weight |
|---|---|---|---|---|
| 1101 | M | C_1 | 34.0 | 63 |
| 1102 | F | C_1 | 32.5 | 73 |
| 1103 | M | C_1 | 87.2 | 82 |
| 1104 | F | C_1 | 80.4 | 81 |
| 1105 | F | C_1 | 84.8 | 64 |
(e) weights sets the sample weights, which are normalized automatically
df.sample(n=3,weights=np.random.rand(df.shape[0])).head()
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 1105 | S_1 | C_1 | F | street_4 | 159 | 64 | 84.8 | B+ |
| 2104 | S_2 | C_1 | F | street_5 | 159 | 97 | 72.2 | B+ |
| 1201 | S_1 | C_2 | M | street_5 | 188 | 68 | 97.0 | A- |
#weighting by one of the columns is common in sampling theory
df.sample(n=3,weights=df['Math']).head()
| ID | School | Class | Gender | Address | Height | Weight | Math | Physics |
|---|---|---|---|---|---|---|---|---|
| 2402 | S_2 | C_4 | M | street_7 | 166 | 82 | 48.7 | B |
| 1202 | S_1 | C_2 | F | street_4 | 176 | 94 | 63.5 | B- |
| 2403 | S_2 | C_4 | F | street_6 | 158 | 60 | 59.7 | B+ |
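A zero weight removes a row from consideration entirely, which makes the weighting easy to check deterministically on a toy frame:

```python
import pandas as pd

toy = pd.DataFrame({'x': [10, 20, 30]})
# only the last row has nonzero weight, so it is the only row that can be drawn
picked = toy.sample(n=1, weights=[0, 0, 1])
print(picked['x'].tolist())  # [30]
```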
七、Questions and exercises
1. Questions
【Question 1】 How can the order of columns or rows be changed?
【Question 2】 Give as many ways as possible to select a subset of a DataFrame.
For example the query method, as well as loc/iloc, the [] operator, boolean masks, where/mask and isin
【Question 3】 Can a single-level index be used with a Slice object? If so, how? Give an example.
Yes. For example df.loc[slice(1102,1104)] selects the same rows as df.loc[1102:1104]
【Question 4】 Which situations are the various index-setting methods each suited to?
【Question 5】 How can the indices of the missing values in a column be found quickly?
df.index[np.where(np.isnan(df))[0]] for the rows and df.columns[np.where(np.isnan(df))[1]] for the columns (this requires an all-numeric frame)
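An isna-based alternative that also works for non-numeric columns (toy data, hypothetical):

```python
import pandas as pd
import numpy as np

toy = pd.DataFrame({'Math': [34.0, np.nan, 87.2]}, index=[1101, 1102, 1103])
# index labels of the rows where Math is missing
print(toy.index[toy['Math'].isna()].tolist())  # [1102]
```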
2. Exercises
【Exercise 1】 Given a dataset of UFO sightings, answer the following questions:
pd.read_csv('data/UFO.csv').head()
| | datetime | shape | duration (seconds) | latitude | longitude |
|---|---|---|---|---|---|
| 0 | 10/10/1949 20:30 | cylinder | 2700.0 | 29.883056 | -97.941111 |
| 1 | 10/10/1949 21:00 | light | 7200.0 | 29.384210 | -98.581082 |
| 2 | 10/10/1955 17:00 | circle | 20.0 | 53.200000 | -2.916667 |
| 3 | 10/10/1956 21:00 | circle | 20.0 | 28.978333 | -96.645833 |
| 4 | 10/10/1960 20:00 | light | 900.0 | 21.418056 | -157.803611 |
(a) Among all sightings lasting longer than 60 seconds, which shape is most common?
(b) Bin longitude from -180° to 180° in 30° steps and latitude from -90° to 90° in 18° steps; which region reports the most UFO events?
data=pd.read_csv('data/UFO.csv')
data[data['duration (seconds)']>60]['shape'].value_counts().index[0]
#alternative: data.rename(columns={'duration (seconds)':'duration'},inplace=True)
#data['duration'].astype('float')
#data.query('duration > 60')['shape'].value_counts().index[0] #query needs extra care with column names and dtypes
'light'
#compare: math_interval = pd.cut(df['Math'],bins=[0,40,60,80,100])
la=np.linspace(-90,90,11).tolist() #latitude bins of 18 degrees
lo=np.linspace(-180,180,13).tolist() #longitude bins of 30 degrees
data['laCut']=pd.cut(data['latitude'],bins=la)
data['loCut']=pd.cut(data['longitude'],bins=lo)
data.head()

| | datetime | shape | duration (seconds) | latitude | longitude | lati | laCut | loCut |
|---|---|---|---|---|---|---|---|---|
| 0 | 10/10/1949 20:30 | cylinder | 2700.0 | 29.883056 | -97.941111 | (18.0, 36.0] | (18.0, 36.0] | (-120.0, -90.0] |
| 1 | 10/10/1949 21:00 | light | 7200.0 | 29.384210 | -98.581082 | (18.0, 36.0] | (18.0, 36.0] | (-120.0, -90.0] |
| 2 | 10/10/1955 17:00 | circle | 20.0 | 53.200000 | -2.916667 | (36.0, 54.0] | (36.0, 54.0] | (-30.0, 0.0] |
| 3 | 10/10/1956 21:00 | circle | 20.0 | 28.978333 | -96.645833 | (18.0, 36.0] | (18.0, 36.0] | (-120.0, -90.0] |
| 4 | 10/10/1960 20:00 | light | 900.0 | 21.418056 | -157.803611 | (18.0, 36.0] | (18.0, 36.0] | (-180.0, -150.0] |
data.set_index(['laCut','loCut']).index.value_counts().index[0]
(Interval(36.0, 54.0, closed='right'), Interval(30.0, 60.0, closed='right'))
【Exercise 2】 Given a Pokemon dataset, answer the following questions:
df=pd.read_csv('data/Pokemon.csv')
df.head()
| | # | Name | Type 1 | Type 2 | Total | HP | Attack | Defense | Sp. Atk | Sp. Def | Speed | Generation | Legendary |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | Bulbasaur | Grass | Poison | 318 | 45 | 49 | 49 | 65 | 65 | 45 | 1 | False |
| 1 | 2 | Ivysaur | Grass | Poison | 405 | 60 | 62 | 63 | 80 | 80 | 60 | 1 | False |
| 2 | 3 | Venusaur | Grass | Poison | 525 | 80 | 82 | 83 | 100 | 100 | 80 | 1 | False |
| 3 | 3 | VenusaurMega Venusaur | Grass | Poison | 625 | 80 | 100 | 123 | 122 | 120 | 80 | 1 | False |
| 4 | 4 | Charmander | Fire | NaN | 309 | 39 | 52 | 43 | 60 | 50 | 65 | 1 | False |
(a) What fraction of Pokemon have two types?
(b) Among Pokemon whose Total is at least 580, what fraction are non-legendary (Legendary=False)?
(c) Among Pokemon whose first type is Fighting, which three have the highest Attack?
(d) Which first type has the largest mean range (max minus min) over the six stats (HP, Attack, Defense, Sp. Atk, Sp. Def, Speed), the mean being taken per type?
(e) Which first type (Type 1 only) has the highest proportion of legendaries? Does that type's legendaries also have the highest Total?
(a)
df['Type 2'].count()/df.shape[0]
0.5175
(b)
df[(df['Total']>580)]['Legendary'].value_counts(normalize=True) #count and normalize
True 0.511111
False 0.488889
Name: Legendary, dtype: float64
(c)
df[df['Type 1']=='Fighting'].sort_values(by='Attack',ascending=False).iloc[:3]
| | # | Name | Type 1 | Type 2 | Total | HP | Attack | Defense | Sp. Atk | Sp. Def | Speed | Generation | Legendary |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 498 | 448 | LucarioMega Lucario | Fighting | Steel | 625 | 70 | 145 | 88 | 140 | 70 | 112 | 4 | False |
| 594 | 534 | Conkeldurr | Fighting | NaN | 505 | 105 | 140 | 95 | 55 | 65 | 45 | 5 | False |
| 74 | 68 | Machamp | Fighting | NaN | 505 | 90 | 130 | 80 | 65 | 85 | 55 | 1 | False |
(d)
df['range'] = df.iloc[:,5:11].max(axis=1)-df.iloc[:,5:11].min(axis=1) #range (max minus min) of the six stats
attribute = df[['Type 1','range']].set_index('Type 1') #Type 1 as index, range as the only column
max_range = 0
result = ''
for i in attribute.index.unique(): #find the type with the largest mean range
    temp = pd.to_numeric(attribute.loc[i,:].mean(), errors='coerce')
    if temp.values[0] > max_range:
        max_range = temp.values[0]
        result = i
print(result,max_range)
Steel 82.18518518518519
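The explicit loop in (d) can also be replaced by a groupby; a sketch on a hypothetical stand-in for df[['Type 1', 'range']]:

```python
import pandas as pd

# hypothetical stand-in data
toy = pd.DataFrame({'Type 1': ['Steel', 'Steel', 'Grass'],
                    'range': [90, 74, 35]})
# mean range per type, then the label of the largest mean
best = toy.groupby('Type 1')['range'].mean().idxmax()
print(best)  # Steel
```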
(e)
df.query('Legendary in [True]')['Type 1'].value_counts(normalize=True).index[0]
'Psychic'
attribute = df.query('Legendary == True')[['Type 1','Total']].set_index('Type 1')
max_value = 0
result = ''
for i in attribute.index.unique():
    #Fairy has only one legendary, so .mean() there returns a plain float;
    #casting every temp to float keeps the comparison uniform
    temp = float(attribute.loc[i,:].mean())
    if temp > max_value:
        max_value = temp
        result = i
result
'Normal'