Pandas: Reading Data, Indexing, Slicing, Computation, Column Combination, Filtering, and Extrema

Table of Contents

  • Pandas: Reading Data, Indexing, Slicing, Computation, Column Combination, Filtering, and Extrema
    • 1. The read_csv function
    • 2. Common attributes of the DataFrame structure
    • 3. Selecting data with Pandas
    • 4. Slicing data with Pandas
    • 5. Selecting by column (important)
    • 6. Filtering by column
    • 7. Simple column operations
    • 8. Combining columns and appending to the original DataFrame
    • 9. Computing extrema

1. The read_csv function

import pandas as pd

'''
A .csv file is two-dimensional data separated by commas.
In Pandas, the core data structure is the DataFrame, analogous to NumPy's ndarray (matrix).
The dtypes attribute of a DataFrame shows the data type of each column read from the .csv file:
in Pandas, integers are int64, floating-point numbers are float64, and strings are object.
read_csv is a very important function!
'''
data = pd.read_csv('food_info.csv')
print(type(data))
print(data.dtypes)
help(pd.read_csv)
<class 'pandas.core.frame.DataFrame'>
NDB_No               int64
Shrt_Desc           object
Water_(g)          float64
Energ_Kcal           int64
Protein_(g)        float64
Lipid_Tot_(g)      float64
Ash_(g)            float64
Carbohydrt_(g)     float64
Fiber_TD_(g)       float64
Sugar_Tot_(g)      float64
Calcium_(mg)       float64
Iron_(mg)          float64
Magnesium_(mg)     float64
Phosphorus_(mg)    float64
Potassium_(mg)     float64
Sodium_(mg)        float64
Zinc_(mg)          float64
Copper_(mg)        float64
Manganese_(mg)     float64
Selenium_(mcg)     float64
Vit_C_(mg)         float64
Thiamin_(mg)       float64
Riboflavin_(mg)    float64
Niacin_(mg)        float64
Vit_B6_(mg)        float64
Vit_B12_(mcg)      float64
Vit_A_IU           float64
Vit_A_RAE          float64
Vit_E_(mg)         float64
Vit_D_mcg          float64
Vit_D_IU           float64
Vit_K_(mcg)        float64
FA_Sat_(g)         float64
FA_Mono_(g)        float64
FA_Poly_(g)        float64
Cholestrl_(mg)     float64
dtype: object
Help on function read_csv in module pandas.io.parsers:

read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None,
         index_col=None, usecols=None, squeeze=False, prefix=None,
         mangle_dupe_cols=True, dtype=None, engine=None, converters=None,
         true_values=None, false_values=None, skipinitialspace=False,
         skiprows=None, nrows=None, na_values=None, keep_default_na=True,
         na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False,
         infer_datetime_format=False, keep_date_col=False, date_parser=None,
         dayfirst=False, iterator=False, chunksize=None, compression='infer',
         thousands=None, decimal=b'.', lineterminator=None, quotechar='"',
         quoting=0, escapechar=None, comment=None, encoding=None, dialect=None,
         tupleize_cols=None, error_bad_lines=True, warn_bad_lines=True,
         skipfooter=0, doublequote=True, delim_whitespace=False, low_memory=True,
         memory_map=False, float_precision=None)

    Read CSV (comma-separated) file into DataFrame

    Also supports optionally iterating or breaking of the file into chunks.
    Additional help can be found in the online docs for IO Tools
    <http://pandas.pydata.org/pandas-docs/stable/io.html>.

    (The full per-parameter documentation is omitted here for brevity; the most
    commonly used parameters are sep, header, names, index_col, usecols, dtype,
    skiprows, nrows, na_values, parse_dates, chunksize, encoding and compression.)

    Returns
    -------
    result : DataFrame or TextParser
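The signature above shows how many knobs read_csv exposes. As a minimal sketch (assuming the same food_info.csv file is on disk; the subset variable is introduced just for this example), a few of the most commonly used parameters can be combined like this:

import pandas as pd

# Read only two named columns and only the first 100 data rows; sep=',' is the
# default and is spelled out here purely for illustration.
subset = pd.read_csv('food_info.csv', sep=',',
                     usecols=['NDB_No', 'Energ_Kcal'], nrows=100)
print(subset.shape)  # (100, 2)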

2. Common attributes of the DataFrame structure

# By default, head shows the first 5 rows of the csv data; Pandas reads the csv
# in and Jupyter Notebook renders it as a table, which is very intuitive.
# head also takes an argument, e.g. head(3) shows only the first three rows.
data.head()

# tail shows the last 5 rows by default and is used just like head.
data.tail()

# columns holds the column names of the DataFrame (an Index object).
print(data.columns)

# shape describes the dimensions of the data: the first element is the number
# of samples (rows), the second the number of features (columns).
print(data.shape)
Index(['NDB_No', 'Shrt_Desc', 'Water_(g)', 'Energ_Kcal', 'Protein_(g)',
       'Lipid_Tot_(g)', 'Ash_(g)', 'Carbohydrt_(g)', 'Fiber_TD_(g)',
       'Sugar_Tot_(g)', 'Calcium_(mg)', 'Iron_(mg)', 'Magnesium_(mg)',
       'Phosphorus_(mg)', 'Potassium_(mg)', 'Sodium_(mg)', 'Zinc_(mg)',
       'Copper_(mg)', 'Manganese_(mg)', 'Selenium_(mcg)', 'Vit_C_(mg)',
       'Thiamin_(mg)', 'Riboflavin_(mg)', 'Niacin_(mg)', 'Vit_B6_(mg)',
       'Vit_B12_(mcg)', 'Vit_A_IU', 'Vit_A_RAE', 'Vit_E_(mg)', 'Vit_D_mcg',
       'Vit_D_IU', 'Vit_K_(mcg)', 'FA_Sat_(g)', 'FA_Mono_(g)', 'FA_Poly_(g)',
       'Cholestrl_(mg)'],
      dtype='object')
(8618, 36)
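These attributes are usually the first things to check after loading a file. A quick inspection sketch (not from the original notebook) that combines them:

# A compact first look at a freshly loaded DataFrame.
print(data.shape)        # (number of rows, number of columns)
print(data.columns[:5])  # the first few column names
print(data.head(3))      # the first three rows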

3. Selecting data with Pandas

# Selecting data in Pandas is just as simple: use the loc indexer with a row label.
print(data.loc[0])
info = data.loc[1]
print(info)
NDB_No                         1001
Shrt_Desc          BUTTER WITH SALT
Water_(g)                     15.87
Energ_Kcal                      717
Protein_(g)                    0.85
Lipid_Tot_(g)                 81.11
Ash_(g)                        2.11
Carbohydrt_(g)                 0.06
Fiber_TD_(g)                      0
Sugar_Tot_(g)                  0.06
Calcium_(mg)                     24
Iron_(mg)                      0.02
Magnesium_(mg)                    2
Phosphorus_(mg)                  24
Potassium_(mg)                   24
Sodium_(mg)                     643
Zinc_(mg)                      0.09
Copper_(mg)                       0
Manganese_(mg)                    0
Selenium_(mcg)                    1
Vit_C_(mg)                        0
Thiamin_(mg)                  0.005
Riboflavin_(mg)               0.034
Niacin_(mg)                   0.042
Vit_B6_(mg)                   0.003
Vit_B12_(mcg)                  0.17
Vit_A_IU                       2499
Vit_A_RAE                       684
Vit_E_(mg)                     2.32
Vit_D_mcg                       1.5
Vit_D_IU                         60
Vit_K_(mcg)                       7
FA_Sat_(g)                   51.368
FA_Mono_(g)                  21.021
FA_Poly_(g)                   3.043
Cholestrl_(mg)                  215
Name: 0, dtype: object
NDB_No                                 1002
Shrt_Desc          BUTTER WHIPPED WITH SALT
Water_(g)                             15.87
Energ_Kcal                              717
Protein_(g)                            0.85
Lipid_Tot_(g)                         81.11
Ash_(g)                                2.11
Carbohydrt_(g)                         0.06
Fiber_TD_(g)                              0
Sugar_Tot_(g)                          0.06
Calcium_(mg)                             24
Iron_(mg)                              0.16
Magnesium_(mg)                            2
Phosphorus_(mg)                          23
Potassium_(mg)                           26
Sodium_(mg)                             659
Zinc_(mg)                              0.05
Copper_(mg)                           0.016
Manganese_(mg)                        0.004
Selenium_(mcg)                            1
Vit_C_(mg)                                0
Thiamin_(mg)                          0.005
Riboflavin_(mg)                       0.034
Niacin_(mg)                           0.042
Vit_B6_(mg)                           0.003
Vit_B12_(mcg)                          0.13
Vit_A_IU                               2499
Vit_A_RAE                               684
Vit_E_(mg)                             2.32
Vit_D_mcg                               1.5
Vit_D_IU                                 60
Vit_K_(mcg)                               7
FA_Sat_(g)                           50.489
FA_Mono_(g)                          23.426
FA_Poly_(g)                           3.012
Cholestrl_(mg)                          219
Name: 1, dtype: object
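loc also accepts a column label alongside the row label, which avoids materializing the whole row first. A small sketch (not part of the original notebook):

# Select a single cell by row label and column label.
print(data.loc[0, 'Shrt_Desc'])  # BUTTER WITH SALT, cf. the row printed above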

4. Slicing data with Pandas

# Note the slicing behavior here: with loc, BOTH endpoints are included.
info = data.loc[3:5]
info

# To select rows at arbitrary index positions, pass a list of labels.
index = [0, 3, 2]
info = data.loc[index]
info
NDB_No Shrt_Desc Water_(g) Energ_Kcal Protein_(g) Lipid_Tot_(g) Ash_(g) Carbohydrt_(g) Fiber_TD_(g) Sugar_Tot_(g) ... Vit_A_IU Vit_A_RAE Vit_E_(mg) Vit_D_mcg Vit_D_IU Vit_K_(mcg) FA_Sat_(g) FA_Mono_(g) FA_Poly_(g) Cholestrl_(mg)
0 1001 BUTTER WITH SALT 15.87 717 0.85 81.11 2.11 0.06 0.0 0.06 ... 2499.0 684.0 2.32 1.5 60.0 7.0 51.368 21.021 3.043 215.0
3 1004 CHEESE BLUE 42.41 353 21.40 28.74 5.11 2.34 0.0 0.50 ... 721.0 198.0 0.25 0.5 21.0 2.4 18.669 7.778 0.800 75.0
2 1003 BUTTER OIL ANHYDROUS 0.24 876 0.28 99.48 0.00 0.00 0.0 0.00 ... 3069.0 840.0 2.80 1.8 73.0 8.6 61.924 28.732 3.694 256.0

3 rows × 36 columns
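Note that including both endpoints is specific to loc, which slices by label; position-based slicing with iloc follows the usual Python convention and excludes the endpoint. A sketch contrasting the two (shapes assume the unmodified 36-column dataset):

print(data.loc[3:5].shape)   # (3, 36): labels 3, 4 and 5 are all included
print(data.iloc[3:5].shape)  # (2, 36): positions 3 and 4 only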

5. Selecting by column (important)

# Pass a single column name to get all the values in that column.
info = data['NDB_No']
info

# Pass a list of column names to select several columns at once.
info = data[['NDB_No', 'Copper_(mg)']]
info
NDB_No Copper_(mg)
0 1001 0.000
1 1002 0.016
2 1003 0.001
3 1004 0.040
4 1005 0.024
5 1006 0.019
6 1007 0.021
7 1008 0.024
8 1009 0.056
9 1010 0.042
10 1011 0.042
11 1012 0.029
12 1013 0.040
13 1014 0.030
14 1015 0.033
15 1016 0.028
16 1017 0.019
17 1018 0.036
18 1019 0.032
19 1020 0.025
20 1021 0.080
21 1022 0.036
22 1023 0.032
23 1024 0.021
24 1025 0.032
25 1026 0.011
26 1027 0.022
27 1028 0.025
28 1029 0.034
29 1030 0.031
... ... ...
8588 43544 0.377
8589 43546 0.040
8590 43550 0.030
8591 43566 0.116
8592 43570 0.200
8593 43572 0.545
8594 43585 0.035
8595 43589 0.027
8596 43595 0.100
8597 43597 0.027
8598 43598 0.000
8599 44005 0.000
8600 44018 0.037
8601 44048 0.026
8602 44055 0.571
8603 44061 0.838
8604 44074 0.028
8605 44110 0.023
8606 44158 0.112
8607 44203 0.020
8608 44258 0.854
8609 44259 0.040
8610 44260 0.038
8611 48052 0.182
8612 80200 0.250
8613 83110 0.100
8614 90240 0.033
8615 90480 0.020
8616 90560 0.400
8617 93600 0.250

8618 rows × 2 columns
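One detail worth remembering: a single name in plain brackets yields a Series, while a list of names (even a list of length one) yields a DataFrame. A quick sketch:

print(type(data['NDB_No']))    # <class 'pandas.core.series.Series'>
print(type(data[['NDB_No']]))  # <class 'pandas.core.frame.DataFrame'>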

6. Filtering by column

col = data.columns.tolist()
print(col)

# Collect the columns whose names end with '(g)'.
filter_col = []
for i in col:
    if i.endswith('(g)'):
        filter_col.append(i)

filter_data = data[filter_col]
print(filter_data.head(3))
['NDB_No', 'Shrt_Desc', 'Water_(g)', 'Energ_Kcal', 'Protein_(g)', 'Lipid_Tot_(g)', 'Ash_(g)', 'Carbohydrt_(g)', 'Fiber_TD_(g)', 'Sugar_Tot_(g)', 'Calcium_(mg)', 'Iron_(mg)', 'Magnesium_(mg)', 'Phosphorus_(mg)', 'Potassium_(mg)', 'Sodium_(mg)', 'Zinc_(mg)', 'Copper_(mg)', 'Manganese_(mg)', 'Selenium_(mcg)', 'Vit_C_(mg)', 'Thiamin_(mg)', 'Riboflavin_(mg)', 'Niacin_(mg)', 'Vit_B6_(mg)', 'Vit_B12_(mcg)', 'Vit_A_IU', 'Vit_A_RAE', 'Vit_E_(mg)', 'Vit_D_mcg', 'Vit_D_IU', 'Vit_K_(mcg)', 'FA_Sat_(g)', 'FA_Mono_(g)', 'FA_Poly_(g)', 'Cholestrl_(mg)']
   Water_(g)  Protein_(g)  Lipid_Tot_(g)  Ash_(g)  Carbohydrt_(g)  \
0      15.87         0.85          81.11     2.11            0.06
1      15.87         0.85          81.11     2.11            0.06
2       0.24         0.28          99.48     0.00            0.00

   Fiber_TD_(g)  Sugar_Tot_(g)  FA_Sat_(g)  FA_Mono_(g)  FA_Poly_(g)
0           0.0           0.06      51.368       21.021        3.043
1           0.0           0.06      50.489       23.426        3.012
2           0.0           0.00      61.924       28.732        3.694
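The loop above can be written more compactly. The following sketch builds the same filter_col with a list comprehension, and also shows DataFrame.filter as an alternative (the regular expression must escape the parentheses in the column names):

# Same result as the explicit loop above.
filter_col = [c for c in data.columns.tolist() if c.endswith('(g)')]
filter_data = data[filter_col]

# Equivalent, using pandas' built-in label filtering with a regex.
filter_data = data.filter(regex=r'\(g\)$')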

7. Simple column operations

# Operations on a column are applied element-wise; here, every value is divided by 1000.
info = data['Iron_(mg)']
g_info = info / 1000
print(g_info)
print(g_info[0:3])
0       0.00002
1       0.00016
2       0.00000
3       0.00031
4       0.00043
5       0.00050
6       0.00033
7       0.00064
8       0.00016
9       0.00021
10      0.00076
11      0.00007
12      0.00016
13      0.00015
14      0.00013
15      0.00014
16      0.00038
17      0.00044
18      0.00065
19      0.00023
20      0.00052
21      0.00024
22      0.00017
23      0.00013
24      0.00072
25      0.00044
26      0.00020
27      0.00022
28      0.00023
29      0.00041
         ...
8588    0.00900
8589    0.00030
8590    0.00010
8591    0.00163
8592    0.03482
8593    0.00228
8594    0.00017
8595    0.00017
8596    0.00486
8597    0.00025
8598    0.00023
8599    0.00013
8600    0.00011
8601    0.00068
8602    0.00783
8603    0.00311
8604    0.00030
8605    0.00018
8606    0.00080
8607    0.00004
8608    0.00387
8609    0.00005
8610    0.00038
8611    0.00520
8612    0.00150
8613    0.00140
8614    0.00058
8615    0.00360
8616    0.00350
8617    0.00140
Name: Iron_(mg), Length: 8618, dtype: float64
0    0.00002
1    0.00016
2    0.00000
Name: Iron_(mg), dtype: float64
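The same element-wise behavior applies to any scalar operation, so converting in the other direction works identically. A sketch (the mg_water name is made up for this example):

# Convert grams of water to milligrams: every element is multiplied by 1000.
mg_water = data['Water_(g)'] * 1000
print(mg_water[0:3])  # 15870.0, 15870.0, 240.0 for the first three rows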

8. Combining columns and appending to the original DataFrame

# Column data can be sliced just as conveniently.
print(data['Water_(g)'][:2])
print(data['Energ_Kcal'][:2])

# Since all columns have the same number of samples, element-wise addition,
# subtraction, multiplication and division between columns is easy: each element
# is combined with the corresponding element of the other column.
info = data['Water_(g)'] * data['Energ_Kcal']
print(info[:2])

# Print the shape before and after adding the new indicator to confirm it worked.
print(data.shape)
data['new_info'] = info
print(data.shape)
0    15.87
1    15.87
Name: Water_(g), dtype: float64
0    717
1    717
Name: Energ_Kcal, dtype: int64
0    11378.79
1    11378.79
dtype: float64
(8618, 36)
(8618, 37)
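The same pattern supports any derived indicator. As an illustrative sketch (the column name is made up for this example, and rows with zero fat would produce inf):

# Protein-to-fat ratio as a new, hypothetical indicator column.
data['Protein_Fat_Ratio'] = data['Protein_(g)'] / data['Lipid_Tot_(g)']
print(data.shape)  # one more column than before: (8618, 38)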

9. Computing extrema

# max() returns the largest value in the column.
max_energ_kcal = data['Energ_Kcal'].max()
max_energ_kcal
902
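max only reports the value itself. To find out which food it belongs to, idxmax returns the row label of the maximum, which can be fed straight back into loc; min works symmetrically. A sketch (not part of the original notebook):

# Row label of the highest-calorie food, then look up its description.
idx = data['Energ_Kcal'].idxmax()
print(data.loc[idx, 'Shrt_Desc'])

# The smallest calorie value in the column.
print(data['Energ_Kcal'].min())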
