
DataFrame With CSV File

Reading a CSV file and performing data manipulation is a common task in data analysis and machine learning. The following steps will help you get started:

servers_info.csv

server_name,location,os_version,hd,ram,date
server1,New York,2016,100,16,2019-07-01
server2,London,2012,150,8,2019-07-02
server3,Paris,2010,120,32,2019-07-03
server4,Miami,2019,100,16,2019-07-04
server5,Liverpool,2016,300,6,2019-07-05
server6,London,2016,100,16,2019-07-01
server7,Amsterdum,2012,150,8,2019-07-02
server8,Munich,2010,120,32,2019-07-03
server9,Berlin,2019,100,16,2019-07-04
server10,New York,2016,300,6,2019-07-05

 

A DataFrame is a two-dimensional labeled data structure with columns of potentially different types. A CSV file is a type of text file used to store tabular data, where each row represents a record and each column represents a field.

In Python, the Pandas library provides a convenient way to read CSV files into DataFrames. Here's how you can create a DataFrame with a CSV file:

import pandas as pd
# Specify the file path (a raw string keeps backslashes literal on Windows)
file_path = r'C:\servers_info.csv'
# Read the CSV file into a DataFrame
df = pd.read_csv(file_path)
# Display the DataFrame
print(df)

Output:

  server_name   location  os_version   hd  ram        date
0     server1   New York        2016  100   16  2019-07-01
1     server2     London        2012  150    8  2019-07-02
2     server3      Paris        2010  120   32  2019-07-03
3     server4      Miami        2019  100   16  2019-07-04
4     server5  Liverpool        2016  300    6  2019-07-05
5     server6     London        2016  100   16  2019-07-01
6     server7  Amsterdum        2012  150    8  2019-07-02
7     server8     Munich        2010  120   32  2019-07-03
8     server9     Berlin        2019  100   16  2019-07-04
9    server10   New York        2016  300    6  2019-07-05

 

print(df.head())

  server_name   location  os_version   hd  ram        date
0     server1   New York        2016  100   16  2019-07-01
1     server2     London        2012  150    8  2019-07-02
2     server3      Paris        2010  120   32  2019-07-03
3     server4      Miami        2019  100   16  2019-07-04
4     server5  Liverpool        2016  300    6  2019-07-05

head() is a method in the pandas library used to view the first few rows of a DataFrame or Series. By default, it displays the first 5 rows, but you can specify the number of rows you want to see by passing an integer argument to the method. This method is useful for quickly inspecting the data in a DataFrame or Series and getting a sense of its structure and content.
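For example, to show just the first three rows:

print(df.head(3))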

df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 6 columns):
 #   Column       Non-Null Count  Dtype
---  ------       --------------  -----
 0   server_name  10 non-null     object
 1   location     10 non-null     object
 2   os_version   10 non-null     int64
 3   hd           10 non-null     int64
 4   ram          10 non-null     int64
 5   date         10 non-null     object
dtypes: int64(3), object(3)
memory usage: 608.0+ bytes
 

.info() is a method in the pandas library in Python used to obtain a summary of a DataFrame or Series. It provides information about the data type of each column, the number of non-null values, and the memory usage of the DataFrame. This method is useful for understanding the structure and content of a dataset, as well as identifying missing or incorrect data. It can also be used to optimize memory usage by identifying opportunities to convert data types or remove unnecessary columns.
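As a quick sketch of that last point (using the columns above; df_small is an illustrative name), the integer columns easily fit in a smaller dtype, and .astype() with a column-to-dtype mapping performs the conversion:

# Downcast the integer columns; all of their values fit comfortably in int16
df_small = df.astype({"os_version": "int16", "hd": "int16", "ram": "int16"})
df_small.info()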

print(df.shape)

(10, 6)

.shape is an attribute in the pandas library in Python used to return the dimensions (number of rows and columns) of a DataFrame or Series as a tuple. For a DataFrame, the first element of the tuple represents the number of rows, while the second element represents the number of columns. For a Series, the tuple contains only one element, representing the length of the Series. This attribute is useful for quickly checking the size of a dataset and for performing operations that require knowledge of the dataset's dimensions, such as reshaping, concatenation, or indexing.

print(df.describe())
        os_version          hd        ram
count    10.000000   10.000000  10.000000
mean   2014.600000  154.000000  15.600000
std       3.373096   79.330532   9.651713
min    2010.000000  100.000000   6.000000
25%    2012.000000  100.000000   8.000000
50%    2016.000000  120.000000  16.000000
75%    2016.000000  150.000000  16.000000
max    2019.000000  300.000000  32.000000

.describe() is a method in the pandas library in Python used to generate descriptive statistics of a DataFrame or Series. It provides information about the central tendency, dispersion, and shape of the dataset's distribution, including count, mean, standard deviation, minimum, maximum, and quartiles. This method is useful for gaining insights into the data and identifying any outliers or anomalies. It can also be used to compare different datasets or subsets of data to understand how they differ in terms of their distribution and summary statistics.

print(df.values)
[['server1' 'New York' 2016 100 16 '2019-07-01']
 ['server2' 'London' 2012 150 8 '2019-07-02']
 ['server3' 'Paris' 2010 120 32 '2019-07-03']
 ['server4' 'Miami' 2019 100 16 '2019-07-04']
 ['server5' 'Liverpool' 2016 300 6 '2019-07-05']
 ['server6' 'London' 2016 100 16 '2019-07-01']
 ['server7' 'Amsterdum' 2012 150 8 '2019-07-02']
 ['server8' 'Munich' 2010 120 32 '2019-07-03']
 ['server9' 'Berlin' 2019 100 16 '2019-07-04']
 ['server10' 'New York' 2016 300 6 '2019-07-05']]

.values is an attribute in the pandas library in Python that returns a NumPy array of the values in a DataFrame or Series: a 2D array for a DataFrame and a 1D array for a Series. It is useful for converting a pandas DataFrame or Series to a NumPy array for compatibility with other libraries and functions that accept NumPy arrays (recent pandas versions recommend the .to_numpy() method for this). It can also be used to access and manipulate the underlying data directly, without going through the pandas API. However, it is generally recommended to use pandas methods for data manipulation and analysis, as they provide a more convenient and efficient interface for working with structured data.

print(df.columns)
Index(['server_name', 'location', 'os_version', 'hd', 'ram', 'date'], dtype='object')

.columns is an attribute in the pandas library in Python that returns an Index of the column labels in a DataFrame. It is useful for quickly inspecting the column names of a dataset and for selecting specific columns for further analysis. It can also be used to rename columns by assigning a new list of column names to the attribute: if you have a DataFrame called df, you can rename its columns by assigning a list of new names to df.columns.
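As a minimal sketch of renaming by assignment (done on a copy so the examples below keep the original names; the new labels are just placeholders):

# Assign a new list of names; its length must match the number of columns
df2 = df.copy()
df2.columns = ["name", "city", "os", "disk", "memory", "added_on"]
print(df2.columns)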

print(df.index)

RangeIndex(start=0, stop=10, step=1)

.index is an attribute in the pandas library in Python used to return the index labels of a DataFrame or Series. For a DataFrame, the index labels represent the row labels, while for a Series, the index labels represent the labels for each element in the Series. This attribute is useful for accessing and manipulating the index labels directly, such as selecting specific rows or reordering the rows based on the index labels. It can also be used to set a new index by assigning a new index object to the attribute, for example, by assigning a list of new index labels to df.index.
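A related operation is promoting an existing column to the index with .set_index(), which returns a new DataFrame and leaves df itself unchanged:

# Use server_name as the row labels instead of the default RangeIndex
df_by_name = df.set_index("server_name")
print(df_by_name.index)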

print(df.sort_values("server_name"))

To keep the output short, combine .sort_values() with .head():

print(df.sort_values("server_name").head())
  server_name  location  os_version   hd  ram        date
0     server1  New York        2016  100   16  2019-07-01
9    server10  New York        2016  300    6  2019-07-05
1     server2    London        2012  150    8  2019-07-02
2     server3     Paris        2010  120   32  2019-07-03
3     server4     Miami        2019  100   16  2019-07-04

.sort_values() is a method in the pandas library in Python used to sort a DataFrame or Series by one or more columns. It can be used to sort the data in ascending or descending order, and can handle missing values in various ways. By default, it sorts the DataFrame or Series in ascending order based on the values in the specified column(s). It can also sort based on multiple columns by passing a list of column names to the method. This method is useful for quickly reordering and prioritizing the data in a dataset based on specific criteria. It can also be used to identify patterns and relationships in the data by sorting on different columns and examining the resulting patterns.

Sorting in descending order

print(df.sort_values("server_name", ascending=False).head())
  server_name   location  os_version   hd  ram        date
8     server9     Berlin        2019  100   16  2019-07-04
7     server8     Munich        2010  120   32  2019-07-03
6     server7  Amsterdum        2012  150    8  2019-07-02
5     server6     London        2016  100   16  2019-07-01
4     server5  Liverpool        2016  300    6  2019-07-05

Passing ascending=False to .sort_values() reverses the default order, sorting the DataFrame or Series in descending order based on the values in the specified column(s). As before, it can sort on multiple columns if you pass a list of column names. This is useful for quickly surfacing the largest or latest values in a dataset.

Sorting by multiple variables

print(df.sort_values(["server_name", "location"]).head())

  server_name  location  os_version   hd  ram        date
0     server1  New York        2016  100   16  2019-07-01
9    server10  New York        2016  300    6  2019-07-05
1     server2    London        2012  150    8  2019-07-02
2     server3     Paris        2010  120   32  2019-07-03
3     server4     Miami        2019  100   16  2019-07-04

.sort_values() with multiple variables is a method in the pandas library in Python used to sort a DataFrame or Series based on multiple columns. It can sort based on two or more columns by passing a list of column names to the method. The first column name in the list is the primary sort column, followed by the secondary, tertiary, and so on. If two or more rows have the same value in the primary sort column, then the method will sort those rows based on the secondary column, and so on. This method is useful for prioritizing and sorting the data in a dataset based on multiple criteria. It can be used to identify patterns and relationships in the data by sorting on different combinations of columns and examining the resulting patterns.

You can also mix sort directions by passing a list to ascending, one flag per column:

print(df.sort_values(["server_name", "location"], ascending=[True, False]).head())
  server_name  location  os_version   hd  ram        date
0     server1  New York        2016  100   16  2019-07-01
9    server10  New York        2016  300    6  2019-07-05
1     server2    London        2012  150    8  2019-07-02
2     server3     Paris        2010  120   32  2019-07-03
3     server4     Miami        2019  100   16  2019-07-04

Subsetting columns

print(df["server_name"])
0     server1
1     server2
2     server3
3     server4
4     server5

......

Subsetting columns is a technique in the pandas library in Python used to select and extract specific columns from a DataFrame or Series. It involves specifying the column name(s) or index location(s) of the desired column(s) within the square brackets of the DataFrame or Series object. For example, if you have a DataFrame called df with columns named "name", "age", and "gender", you can extract only the "name" and "age" columns by using the code df[['name', 'age']]. Alternatively, you can use the index location of the columns, such as df.iloc[:, 0:2], which would extract the first two columns of the DataFrame. This technique is useful for focusing on specific columns of interest and reducing the amount of data that needs to be processed, especially when working with large datasets.

Subsetting multiple columns

print(df[["server_name", "location"]])
Subsetting rows

print(df["ram"] > 8)

0     True
1    False
2     True
3     True
4    False
5     True

......

The command df["ram"] > 8 is used to create a boolean mask in a pandas DataFrame called df based on a condition, where the condition is checking if the values in the "ram" column are greater than 8.

Here's a brief description of the command:

  • df["ram"] refers to the "ram" column in the DataFrame df.
  • > 8 is a comparison operator that checks if the values in the "ram" column are greater than 8. This will result in a boolean mask with True values where the condition is met and False values where the condition is not met.

The resulting output of the command will be a series of boolean values, with True values corresponding to rows where the value in the "ram" column is greater than 8, and False values where the value is less than or equal to 8. This boolean mask can be used to subset or filter the original DataFrame to only include the rows where the condition is met, such as df[df["ram"] > 8].
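Applying that mask to the sample data keeps only the rows where the condition holds:

print(df[df["ram"] > 8])

  server_name  location  os_version   hd  ram        date
0     server1  New York        2016  100   16  2019-07-01
2     server3     Paris        2010  120   32  2019-07-03
3     server4     Miami        2019  100   16  2019-07-04
5     server6    London        2016  100   16  2019-07-01
7     server8    Munich        2010  120   32  2019-07-03
8     server9    Berlin        2019  100   16  2019-07-04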

Subsetting based on text data

 

Subsetting based on text data is a technique used in pandas to extract specific rows from a DataFrame based on the content of a text column. This involves using boolean indexing to create a boolean mask that checks if a specific text pattern or substring exists in a column of text data.

For example, if you have a DataFrame with a column of text data called "Name", you can extract only the rows where the "Name" column contains the word "John" by using the code df[df["Name"].str.contains("John")]. The str.contains() method searches for the specified text pattern within the "Name" column and returns a boolean mask indicating which rows match the condition. The resulting boolean mask is then used to subset the original DataFrame to only include the rows where the condition is met.

This technique is useful when working with text data and can be used to filter and extract specific subsets of data based on patterns or keywords within the text. Other string methods, such as str.startswith(), str.endswith(), and str.match(), can also be used for more specific string operations.
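A sketch on the sample data: the substring "Lon" matches the two London rows.

print(df[df["location"].str.contains("Lon")])

  server_name  location  os_version   hd  ram        date
1     server2    London        2012  150    8  2019-07-02
5     server6    London        2016  100   16  2019-07-01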

print(df[df["location"] == "New York"])

  server_name  location  os_version   hd  ram        date
0     server1  New York        2016  100   16  2019-07-01
9    server10  New York        2016  300    6  2019-07-05

The command df[df["location"] == "New York"] is used to subset a pandas DataFrame called df based on a condition, where the condition is checking if the values in the "location" column are equal to "New York".

Here's a brief description of the command:

  • df["location"] refers to the "location" column in the DataFrame df.
  • == "New York" is a comparison operator that checks if the values in the "location" column are equal to "New York". This will result in a boolean mask with True values where the condition is met and False values where the condition is not met.

The boolean mask is then used to subset the original DataFrame by using it inside square brackets, resulting in a new DataFrame that only includes the rows where the "location" column is equal to "New York". This command can be used to filter and focus on specific subsets of data within the original DataFrame.

Subsetting based on dates

Subsetting based on dates is a technique used in pandas to extract specific rows from a DataFrame based on date values in a column. This involves using boolean indexing to create a boolean mask that checks if a date falls within a certain range or satisfies a specific condition.

For example, if you have a DataFrame with a column of date data called "Date", you can extract only the rows where the "Date" column falls within a specific date range by using the code df[(df["Date"] >= start_date) & (df["Date"] <= end_date)]. Here, start_date and end_date are variables representing the start and end dates of the desired date range. The & operator is used to combine the two boolean conditions, and the resulting boolean mask is then used to subset the original DataFrame to only include the rows where the condition is met.

This technique is useful when working with time series data and can be used to filter and extract specific subsets of data based on date ranges or other date-related conditions. Other date/time functions, such as pd.to_datetime(), can also be used to convert strings or other data types to date values for use in date-based operations.
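A minimal sketch on the sample data: converting the "date" column with pd.to_datetime() first makes the comparisons date-aware rather than plain string comparisons.

# Parse the "date" strings into datetime64 values
df["date"] = pd.to_datetime(df["date"])
# Inclusive date-range filter
print(df[(df["date"] >= "2019-07-02") & (df["date"] <= "2019-07-04")])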

print(df[df["date"] < "2019-07-04"])

  server_name   location  os_version   hd  ram        date
0     server1   New York        2016  100   16  2019-07-01
1     server2     London        2012  150    8  2019-07-02
2     server3      Paris        2010  120   32  2019-07-03
5     server6     London        2016  100   16  2019-07-01
6     server7  Amsterdum        2012  150    8  2019-07-02
7     server8     Munich        2010  120   32  2019-07-03

Subsetting based on multiple conditions

Subsetting based on multiple conditions is a technique used in pandas to extract specific rows from a DataFrame that satisfy multiple criteria. This involves using boolean indexing to create a boolean mask that checks if a row satisfies two or more conditions.

For example, if you have a DataFrame with columns called "Gender" and "Age", you can extract only the rows where "Gender" is "Female" and "Age" is greater than or equal to 30 by using the code df[(df["Gender"] == "Female") & (df["Age"] >= 30)]. Here, the & operator is used to combine the two boolean conditions, and the resulting boolean mask is then used to subset the original DataFrame to only include the rows where both conditions are met.

This technique can be used to filter and extract specific subsets of data based on multiple criteria, which can be especially useful in large datasets where only a small subset of the data is of interest. Other logical operators, such as | for "or" conditions and ~ for negation, can also be used to create more complex boolean expressions for subsetting data.

is_name = df["server_name"] == "server1"

is_loc = df["location"] == "New York"

print(df[is_name & is_loc])

or, equivalently, in a single expression:

print(df[(df["server_name"] == "server1") & (df["location"] == "New York")])
  server_name  location  os_version   hd  ram        date
0     server1  New York        2016  100   16  2019-07-01
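An "or" condition, sketched on the same data with the | operator (rows located in Paris, or rows with a ram value of 32):

print(df[(df["location"] == "Paris") | (df["ram"] == 32)])

  server_name  location  os_version   hd  ram        date
2     server3     Paris        2010  120   32  2019-07-03
7     server8    Munich        2010  120   32  2019-07-03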

Subsetting using .isin()

Subsetting using .isin() is a technique used in pandas to extract specific rows from a DataFrame that match a set of values in a particular column. This involves using the .isin() method, which checks whether each element in a column is contained in a set of specified values and returns a boolean mask.

For example, if you have a DataFrame with a column of categorical data called "Fruit", you can extract only the rows where the "Fruit" column contains the values "Apple" or "Banana" by using the code df[df["Fruit"].isin(["Apple", "Banana"])]. The .isin() method checks whether each element in the "Fruit" column is contained in the set of specified values, which is ["Apple", "Banana"] in this case, and returns a boolean mask indicating which rows match the condition. The resulting boolean mask is then used to subset the original DataFrame to only include the rows where the condition is met.

This technique is useful when working with categorical data and can be used to filter and extract specific subsets of data based on a specific set of values in a column. Other similar methods, such as .str.contains() for text data and .between() for numerical data, can also be used to extract data based on more specific conditions.

loc_mask = df["location"].isin(["New York", "London"])

print(df[loc_mask])

  server_name  location  os_version   hd  ram        date
0     server1  New York        2016  100   16  2019-07-01
1     server2    London        2012  150    8  2019-07-02
5     server6    London        2016  100   16  2019-07-01
9    server10  New York        2016  300    6  2019-07-05

New columns

df["avg_new"] = df["ram"]/100

print(df)
  server_name   location  os_version   hd  ram        date  avg_new
0     server1   New York        2016  100   16  2019-07-01     0.16
1     server2     London        2012  150    8  2019-07-02     0.08
2     server3      Paris        2010  120   32  2019-07-03     0.32
3     server4      Miami        2019  100   16  2019-07-04     0.16
4     server5  Liverpool        2016  300    6  2019-07-05     0.06

......

Multiple manipulations

Multiple manipulations is a term used to describe the process of combining multiple data manipulation techniques in pandas to extract, transform, and analyze specific subsets of data from a larger dataset.

This often involves chaining multiple methods and functions together in a specific order to achieve the desired output. For example, you can use techniques such as filtering, grouping, aggregating, and sorting to extract specific subsets of data based on multiple conditions, and then transform or analyze the data using mathematical or statistical functions.

Multiple manipulations can be particularly useful when working with large datasets and complex data structures, as it allows you to extract and analyze specific subsets of data based on multiple criteria, and then transform or summarize the data to gain insights and make informed decisions. Pandas offers a wide range of built-in functions and methods to perform various data manipulations, and the ability to chain them together makes it a powerful tool for data analysis and manipulation.

i_ram = df[df["ram"] < 16]

i_loc = i_ram.sort_values("location", ascending=False)

print(i_loc[["server_name", "location", "ram"]])
  server_name   location  ram
9    server10   New York    6
1     server2     London    8
4     server5  Liverpool    6
6     server7  Amsterdum    8

The given code first creates a subset of the original DataFrame df called i_ram by using boolean indexing to extract only the rows where the "ram" column has a value less than 16.

The next line of code then creates a new DataFrame called i_loc by sorting the rows of i_ram in descending order based on the values in the "location" column using the .sort_values() method. Here, the ascending=False parameter is passed to sort the rows in descending order of the "location" column.

Overall, this code filters the original DataFrame based on a specific condition and then sorts the resulting subset based on a specific column to create a new DataFrame with only the desired rows and order.