Anachem lab experiments (Summaries of Analytical Chemistry)
MODULE 2: INTRODUCTION TO QUANTITATIVE ANALYSIS
INTRODUCTION

DIVISIONS OF QUANTITATIVE ANALYSIS
A. Volumetric Analysis - measures the volume of a standard reagent that reacts with the desired constituent in a definite chemical reaction. This is commonly known as "TITRATION".
B. Gravimetric Analysis - based entirely upon weight; the original substance is weighed, and from it the constituent to be determined is isolated as an element or as a compound of definite composition.
C. Colorimetric Method - the substance to be determined is converted to some compound which imparts to its solution a distinct color, the intensity of which varies in proportion to the amount of the compound in the solution.
D. Other Physicochemical Methods - quantities of constituents are obtained from measurements of such properties as the turbidity of suspensions of semicolloidal precipitates.
E. Gas Volumetric Methods - determination of mixtures of gases.

VOLUMETRIC ANALYSIS
Divisions
A. Neutralization methods - acidimetry (acid determination) and alkalimetry (alkali determination)
B. Oxidation and Reduction ("redox") - oxidimetry and reductimetry
C. Precipitation methods - precipitimetry
D. Complex formation methods - complexometry

DEFINITION OF TERMS
Titration - determination of the concentration (unknown) of a solution by the gradual addition of a standard (known) solution
Titrant - solution of known concentration (reagent solution)
Analyte - substance under study (substance being titrated)
Standard solution - solution whose concentration is accurately known
a. Standard acid solutions - HCl, HNO3, H2SO4, H2C2O4
b. Standard alkali solutions - NaOH, KOH, Ba(OH)2
Standardization - process of determining the exact concentration of a titrant
Primary standard - a substance of highest purity used for standardization
a. Characteristics: highest purity, soluble in water, stable toward air, high temperature, and humidity
Examples: potassium hydrogen phthalate (KHP) or KHC8H4O4, sodium carbonate, sulfuric acid



Indicator - a substance (colored dye) that is sensitive to hydrogen ion concentration and changes color once the endpoint is reached (from acidity to alkalinity and vice versa). Examples: phenolphthalein, methyl orange, congo red, bromocresol green, litmus paper
Endpoint - point in the titration where the color of the indicator changes, which coincides with the EQUIVALENCE POINT
Equivalence point - the point in the titration at which the chemical reaction is completed, or the point of complete neutralization. This is the point where the acid and the base have been added in amounts equivalent to each other
Acidimetry - an acidic substance is titrated with a standard solution of an alkali
Alkalimetry - a basic substance is titrated with a standard solution of an acid
Normality - equivalents of solute / liter of solution
Neutralization - a reaction between a base and an acid to form salt and water
Precision - refers to how closely repeated measurements of the same quantity come to each other
Accuracy - refers to how closely an experimental observation agrees with the true or correct value
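As an illustration of the normality definition above (not part of the original handout), the short Python sketch below computes the normality of a hypothetical sulfuric acid solution, assuming H2SO4 furnishes two H+ equivalents per mole and taking its molar mass as roughly 98.08 g/mol.

    # Illustrative sketch: normality of a sulfuric acid solution, assuming
    # H2SO4 supplies 2 equivalents of H+ per mole (molar mass ~98.08 g/mol).
    mass_g = 4.90          # grams of H2SO4 weighed out (hypothetical)
    molar_mass = 98.08     # g/mol
    eq_per_mol = 2         # H+ equivalents per mole in acid-base titrations
    volume_L = 0.500       # final solution volume in liters

    moles = mass_g / molar_mass            # ~0.050 mol
    equivalents = moles * eq_per_mol       # ~0.100 eq
    normality = equivalents / volume_L     # equivalents per liter of solution
    print(f"{normality:.2f} N")            # -> 0.20 N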

REAGENTS
A. Technical or Commercial Grade
  • Subjected to only superficial purification and are seldom used in analytical work
  • Used in the preparation of cleaning solutions and other solutions not directly involved in analytical processes
B. Chemically Pure
  • High degree of purity
  • Amounts of impurities are too small to cause interference in the process in which the reagent is used
C. Reagent Grade
  • Reagents that have been analysed by the manufacturer, with the analyses given on the labels of the containers; these are favoured by analytical chemists
D. Primary Standard
  • A substance of highest purity used for standardization

LABORATORY APPARATUSES AND EQUIPMENT
A. Wash Bottle
  • One of the most useful tools in quantitative analysis. Its principal use is to deliver to a graduate or other container a small quantity of distilled water, either through the mouthpiece or through the jet, and to wash precipitates
B. Centrifuge
  • A device driven by an electric motor, generally used to separate the precipitate from the supernatant liquid.


LABORATORY TECHNIQUES AND UNIT OPERATIONS

Laboratory techniques are the processes and practices that are recommended for using the various equipment in the laboratory. Additionally, a unit operation is a basic step in a process. Unit operations involve bringing about a physical change such as separation, crystallization, evaporation, or filtration. For example, in milk processing, homogenization, pasteurization, chilling, and packaging are each unit operations which are connected to create the overall process. A process may have many unit operations to obtain the desired product. Historically, the different chemical industries were regarded as different industrial processes with different principles.

Use of Analytical Balance

Analytical balances are the basic tools of quantitative analysis. They are used to accurately weigh samples and precipitates. The balances are able to provide accurate measurements to four decimal places, for example, 0.0001 grams. Due to the extremely sensitive nature of these instruments, there are several factors that can cause them to give erroneous readings.

For an analytical balance to give an accurate reading, the instrument must be calibrated. Calibration is important because it defines the accuracy and quality of the measurements that are recorded by the balance. To ensure the integrity of the measurement results, there has to be an ongoing process of servicing and maintaining the calibration of the equipment throughout its lifetime. As a result, reliable, accurate, and repeatable measurements will always be achieved.

Factors that can affect the accuracy of analytical balances:
● Temperature
● Vibrations
● Air currents
● Calibration
● Slopes
● User errors
● Plastic or glass weigh containers
● Improper handling of the sample

Procedure:

  1. Make sure that you are working at a clean and organized station. Plug the device into its AC source and ensure that it is properly plugged into the socket in order to avoid short circuits.
  2. Calibrate the device first before using it. The tare function should be performed no matter how much of a substance is to be weighed.
  Note:
  ● Center materials being weighed on the weighing pan.
  ● Hygroscopic materials absorb moisture and they should be weighed rapidly.
  ● If substances to be weighed are hot, first allow them to cool in a dry environment to avoid moisture absorption. For example, warm hygroscopic samples should be allowed to cool in a desiccator prior to weighing. This is particularly important when weighing samples that have just been removed from drying ovens.
  3. Open the door of the analytical balance. Using gloves, tongs, or tweezers (not your bare fingers, because fingerprints add mass), place the weighing container or weighing paper on the balance, shut the door, and wait a few seconds until the unit stabilizes. It is a good practice to record the weight of the container or paper, but to get an accurate readout of the substance being weighed you must cancel out the weight of the container or paper. Do this by pressing the tare button until the display reads 0.0000 g.
  Note: Never weigh chemicals directly in contact with the balance pan. Use containers such as beakers, flasks, and weighing bottles.
  4. When weighing is completed, return the analytical balance to the standby mode. Use a soft brush to clean the weighing pan and cabinet. Remove spilled materials from the weighing area. For more extensive cleaning it is advisable to unplug the unit and use a damp cloth moistened with a mild soapy solution, taking care that no moisture enters the weighing apparatus. Dry the unit thoroughly with a soft towel and be certain that the balance door is shut. If the unit is unplugged, it will have to be warmed up again before weighing can begin.

Making quantitative transfers

Quantitative transfer is used whenever a solution or suspension of precipitate must be transferred from one vessel to another without losing any analyte in the process. You will need to use this procedure when making solutions that require heating and stirring, or when transferring acid digests of materials to a volumetric flask. When making standard solutions, the substance in question is first weighed accurately by difference into a beaker, where it is dissolved in about half the final volume of solvent. After cooling, this solution is transferred in its entirety to a volumetric flask for dilution to final volume. When dealing with acid digests, the digest is similarly transferred in its entirety to a volumetric flask, passing through a filter paper/funnel if necessary to remove any solid residue.
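The phrase "weighed accurately by difference" above amounts to subtracting two balance readings; the minimal Python sketch below, with made-up masses, illustrates the arithmetic.

    # Illustrative sketch of weighing by difference (values are hypothetical).
    # The sample mass is the drop in the weighing bottle's mass after transfer,
    # so any residue clinging to the bottle is automatically excluded.
    mass_bottle_plus_sample = 15.2345   # g, before transferring to the beaker
    mass_bottle_after = 14.7321         # g, after tapping sample into the beaker

    mass_sample = mass_bottle_plus_sample - mass_bottle_after
    print(f"Sample transferred: {mass_sample:.4f} g")   # -> 0.5024 g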


Delivering an aliquot

An aliquot is a measured fraction of the volume of a liquid sample. (Fig. 2-22 of Fundamentals of Analytical Chemistry 9E, Skoog)

Steps in dispensing an aliquot:
a. Draw a small amount of the liquid into the pipet and
b. wet the interior surface of the glass by tilting and rotating the pipet. Repeat this procedure two more times. Then draw liquid into the pipet so that the level is a few centimeters above the line etched on the stem of the pipet. While holding the tip of the pipet against the inside surface of the volumetric flask,
c. allow the liquid level to descend until the bottom of the meniscus is aligned with the line.
d. Remove the pipet from the volumetric flask,
e. tilt it until liquid is drawn slightly up into the pipet, and wipe the tip with a lintless tissue as shown. Then, while holding the pipet vertically,
f. allow the liquid to flow into the receiving flask until just a small amount of liquid remains inside the tip and a drop remains on the outside.
g. Tilt the flask slightly, and
h. finally, touch the tip of the pipet to the inside of the flask.

When this step is completed, a small amount of liquid will remain in the pipet. Do not remove this remaining liquid. The pipet is calibrated to reproducibly deliver its rated volume when this liquid remains in the tip.


Calibration of a pipet

Pipets permit the transfer of accurately known volumes from one container to another. Common types are shown in Figure 2-17, and information concerning their use is given in Table 2-2. (Fig. 2-17 of Fundamentals of Analytical Chemistry 9E, Skoog)

A volumetric, or transfer, pipet (Figure 2-17a) delivers a single, fixed volume between 0.5 and 200 mL. Many such pipets are color coded by volume for convenience in identification and sorting. Measuring pipets (Figure 2-17b and c) are calibrated in convenient units to permit delivery of any volume up to a maximum capacity ranging from 0.1 to 25 mL. All volumetric and measuring pipets are first filled to a calibration mark, but the manner in which the transfer is completed depends on the particular type. Because most liquids are attracted to glass, a small amount of liquid tends to remain in the tip after the pipet is emptied. This residual liquid is never blown out of a volumetric pipet or from some measuring pipets, but it is blown out of other types of pipets (see Table 2-2).

Handheld Eppendorf micropipets (see Figure 2-17d and Figure 2-18a) deliver adjustable microliter volumes of liquid. With these pipets, a known and adjustable volume of air is displaced from the plastic disposable tip by depressing the pushbutton on the top of the pipet to a first stop. This button operates a spring-loaded piston that forces air out of the pipet. The volume of displaced air can be varied by a locking digital micrometer adjustment located on the front or top of the device. The plastic tip is then inserted into the liquid, and the pressure on the button released, causing liquid to be drawn into the tip. The tip is then placed


Steps in calibration of a pipet:
a. Determine the empty mass of the stoppered receiver to the nearest milligram.
b. Transfer a portion of temperature-equilibrated water to the receiver with the pipet.
c. Weigh the receiver and its contents (again, to the nearest milligram).
d. Calculate the mass of water delivered from the difference in these masses.
e. With the aid of Table 2-3, calculate the volume delivered.
f. Repeat the calibration several times, and calculate the mean volume delivered and its standard deviation.
(Table 2-3 of Fundamentals of Analytical Chemistry 9E, Skoog)
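A minimal Python sketch of the arithmetic behind steps a-f is given below. The masses are hypothetical, and the 1.0029 mL/g conversion is only an approximate value of the kind tabulated in Table 2-3 (volume occupied by 1 g of water near 20 °C, buoyancy-corrected); the value for the actual water temperature should be looked up.

    # Sketch of the pipet-calibration arithmetic (steps a-f above).
    # The factor 1.0029 mL/g is an approximate table value; confirm it for
    # the actual water temperature before using it.
    import statistics

    empty_receiver_g = 24.212                     # step a (hypothetical reading)
    filled_receiver_g = [34.233, 34.238, 34.229]  # steps b-c, replicate trials
    ml_per_gram_20C = 1.0029                      # volume of 1 g of water near 20 C

    volumes = [(full - empty_receiver_g) * ml_per_gram_20C
               for full in filled_receiver_g]      # steps d-e
    mean_volume = statistics.mean(volumes)         # step f
    std_dev = statistics.stdev(volumes)
    print(f"mean = {mean_volume:.3f} mL, s = {std_dev:.3f} mL")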


Reading a Buret

(Fig. 2-19 of Fundamentals of Analytical Chemistry 9E, Skoog)

Burets, like measuring pipets, make it possible to deliver any volume up to the maximum capacity of the device. The precision attainable with a buret is substantially greater than the precision with a pipet.

A buret consists of a calibrated tube to hold titrant plus a valve arrangement by which the flow of titrant is controlled. This valve is the principal source of difference among burets. The simplest pinchcock valve consists of a close-fitting glass bead inside a short length of rubber tubing that connects the buret and its tip (see Figure 2-19a). Only when the tubing is deformed does liquid flow past the bead. A buret equipped with a glass stopcock for a valve relies on a lubricant between the ground-glass surfaces of stopcock and barrel for a liquid-tight seal. Some solutions, notably bases, cause glass stopcocks to freeze when they are in contact with ground glass for long periods. Therefore, glass stopcocks must be thoroughly cleaned after each use. Most burets made in the last several decades have Teflon® valves, which are unaffected by most common reagents and require no lubricant (see Figure 2-19b).

To read a buret:


air displaces cooler air and directs it towards the heating element until the entire chamber reaches the desired temperature. This method is known to have poor uniform heat distribution, a longer preheat time, and energy waste compared to mechanical convection. Mechanical convection laboratory furnaces use blowers and baffles in the furnace chamber, which help to heat the chamber quicker and more evenly, making them a more popular choice. Typical functions that laboratory furnaces perform are drying, where moisture is removed from the sample; baking, where a sample is heated without dehumidification; and curing, where the sample is chemically or physically altered.

Use of pH Meter

A pH meter is an instrument used to measure the acidity or alkalinity of a solution, also known as pH. pH is the unit of measure that describes the degree of acidity or alkalinity. It is measured on a scale of 0 to 14. The quantitative information provided by the pH value expresses the degree of the activity of an acid or base in terms of hydrogen ion activity. The pH value of a substance is directly related to the ratio of the hydrogen ion [H+] and the hydroxyl ion [OH-] concentrations. If the H+ concentration is greater than the OH- concentration, the material is acidic; i.e., the pH value is less than 7. If the OH- concentration is greater than the H+ concentration, the material is basic, with a pH value greater than 7. If equal amounts of H+ and OH- ions are present, the material is neutral, with a pH of 7. Acids and bases have free hydrogen and hydroxyl ions, respectively. The relationship between hydrogen ions and hydroxyl ions in a given solution is constant for a given set of conditions, so either one can be determined by knowing the other.
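The [H+]/[OH-] relationship described above can be illustrated with a short Python sketch (not part of the handout), assuming ideal behavior so that concentrations stand in for activities and Kw = 1.0 x 10^-14 at 25 °C.

    # Sketch of the pH / pOH relationship (ideal behavior assumed:
    # concentrations in place of activities, Kw = 1.0e-14 at 25 C).
    import math

    h_plus = 1.0e-4                      # mol/L hydrogen ion concentration
    pH = -math.log10(h_plus)             # -> 4.0  (acidic, pH < 7)
    pOH = 14.0 - pH                      # -> 10.0
    oh_minus = 10 ** (-pOH)              # -> 1.0e-10 mol/L

    print(f"pH = {pH:.1f}, pOH = {pOH:.1f}, [OH-] = {oh_minus:.1e} M")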

Calibration Procedure

  1. Turn the meter's power on by pressing the "power" button.
  2. Select the pH Mode and set the temperature control knob to 25°C. Adjust the cal 2 knob to read 100%.
  3. Rinse the electrode with deionized water and blot dry using a piece of tissue.
  4. Place the electrode in the solution of pH 7 buffer, allow the display to stabilize and, then, set the display to read 7 by adjusting cal 1. Remove the electrode from the buffer.
  5. Rinse the electrode with deionized water and blot dry using a piece of tissue.
  6. Place the electrode in the solution of pH 2 buffer, allow the display to stabilize and, then, set the display to read 2 by adjusting cal 2. Remove the electrode from the buffer.
  7. Rinse the electrode with deionized water and blot dry using a piece of tissue.
  NOTE: Buffer solutions are made available to you in individually labeled 2 oz. bottles. The buffers are to be used in these containers only! Do not pour them into other containers at any time. After use, cap the bottles so that the buffers can be reused.

Measuring pH
  8. Make sure that the meter is set to the pH Mode and adjust the temperature to 25°C.
  9. Place the electrode in the sample to be tested.
  10. The pH of the solution appears in the display. NOTE: Allow the display to stabilize before taking your reading!
  11. Rinse the pH electrode and place it back in the storage solution.

Dilution
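The Dilution section itself is not shown in this preview. Purely as an illustrative sketch, and assuming it concerns the usual dilution relation C1V1 = C2V2, a minimal Python example follows.

    # Hedged sketch only: assumes the standard dilution relation C1*V1 = C2*V2.
    c_stock = 0.100    # M, concentration of the stock solution
    c_target = 0.0100  # M, desired diluted concentration
    v_target = 250.0   # mL, desired final volume

    v_stock = c_target * v_target / c_stock   # volume of stock to pipet
    print(f"Pipet {v_stock:.1f} mL of stock and dilute to {v_target:.0f} mL")
    # -> Pipet 25.0 mL of stock and dilute to 250 mL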

DATA HANDLING AND MANAGEMENT

Since analytical chemistry is the science of making quantitative measurements, it is important that raw data are manipulated and reported correctly to give a realistic estimate of the uncertainty in a result. Simple data manipulations may only require keeping track of significant figures. The benefits of standardizing any type of data are ensuring data quality through consistency, supporting data integrity, and improving data shareability, i.e., collaborative research.

Accuracy, precision, errors, significant figures, rounding off, propagation of errors

Precision describes the reproducibility of measurements - in other words, the closeness of results that have been obtained in exactly the same way. Generally, the precision of a measurement is readily determined by simply repeating the measurement on replicate samples. Three terms are widely used to describe the precision of a set of replicate data: standard deviation, variance, and coefficient of variation. These three are functions of how much an individual result xi differs from the mean, called the deviation from the mean, di.

Accuracy indicates the closeness of the measurement to the true or accepted value and is expressed by the error. Figure 5-2 illustrates the difference between accuracy and precision. Note that accuracy measures agreement between a result and the accepted value. Precision, on the other hand, describes the agreement among several results obtained in the same way. We can determine precision just by measuring replicate samples. Accuracy is often more difficult to determine because the true value is usually unknown. An accepted value must be used instead. Accuracy is expressed in terms of either absolute or relative error.
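The three precision measures named above (standard deviation, variance, and coefficient of variation) can be computed with Python's standard library; the replicate results below are hypothetical, shown only to make the definitions concrete.

    # Sketch of the three precision measures for hypothetical replicate
    # chloride results (values are illustrative only).
    import statistics

    results = [24.31, 24.27, 24.33, 24.29]   # % Cl, replicate determinations

    mean = statistics.mean(results)
    deviations = [x - mean for x in results]          # d_i = x_i - mean
    s = statistics.stdev(results)                     # sample standard deviation
    variance = statistics.variance(results)           # s squared
    cv_percent = 100 * s / mean                       # coefficient of variation, %

    print(f"mean = {mean:.3f}, s = {s:.3f}, s^2 = {variance:.5f}, CV = {cv_percent:.2f}%")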

Fig. 5-2 of Fundamentals of Analytical Chemistry 9E, Skoog: illustration of accuracy and precision using the pattern of darts on a dartboard. Note that we can have very precise results (upper right) with a mean that is not accurate and an accurate mean (lower left) with data points that are imprecise.

The absolute error E in the measurement of a quantity x is given by the equation E = x - xt, where xt is the true or accepted value of the quantity. The absolute error of a measurement is the difference between the measured value and the true value. The sign of the absolute error tells you whether the value in question is high or low. If the measurement result is low, the sign is negative; if the measurement result is high, the sign is positive.

The relative error Er is often a more useful quantity than the absolute error. The percent relative error is given by the expression Er = (x - xt)/xt × 100%. Relative error is also expressed in parts per thousand (ppt), for example, Er = (x - xt)/xt × 1000 ppt. The relative error of a measurement is the absolute error divided by the true value. Relative error may be expressed in percent, parts per thousand, or parts per million, depending on the magnitude of the result.
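A short Python sketch applying these error definitions to a hypothetical measurement (assumed true value 20.00, measured value 19.78) may help make the sign conventions concrete.

    # Sketch applying the error definitions above to a hypothetical result.
    x_measured = 19.78   # measured value (e.g., % analyte)
    x_true = 20.00       # true or accepted value

    absolute_error = x_measured - x_true                  # E = x - xt -> -0.22 (low result)
    relative_error_pct = absolute_error / x_true * 100    # -> -1.1 %
    relative_error_ppt = absolute_error / x_true * 1000   # -> -11 ppt

    print(f"E = {absolute_error:+.2f}, Er = {relative_error_pct:+.1f}%  "
          f"({relative_error_ppt:+.0f} ppt)")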

Types of error in experimental data

The precision of a measurement is readily determined by comparing data from carefully replicated experiments. Unfortunately, an estimate of the accuracy is not as easy to obtain. To determine the accuracy, we have to know the true value, which is usually what we are seeking in the analysis.

  • Random (or indeterminate) error causes data to be scattered more or less symmetrically around a mean value. In general, then, the random error in a measurement is reflected by its precision.
  • Systematic (or determinate) error causes the mean of a data set to differ from the accepted value. In general, a systematic error in a series of replicate measurements causes all the results to be too high or too low. An example of a systematic error is the loss of a volatile analyte while heating a sample.


Of the three types of systematic errors encountered in a chemical analysis, method errors are usually the most difficult to identify and correct.

  • Personal errors result from the carelessness, inattention, or personal limitations of the experimenter. Many measurements require personal judgments. Examples include estimating the position of a pointer between two scale divisions, the color of a solution at the end point in a titration, or the level of a liquid with respect to a graduation in a pipet or buret. Judgments of this type are often subject to systematic, unidirectional errors. For example, one person may read a pointer consistently high, while another may be slightly slow in activating a timer. Yet a third may be less sensitive to color changes; an analyst who is insensitive to color changes tends to use excess reagent in a volumetric analysis. Analytical procedures should always be adjusted so that any known physical limitations of the analyst cause negligibly small errors. Automation of analytical procedures can eliminate many errors of this type.

A universal source of personal error is prejudice, or bias. Most of us, no matter how honest, have a natural, subconscious tendency to estimate scale readings in a direction that improves the precision in a set of results. Alternatively, we may have a preconceived notion of the true value for the measurement. We then subconsciously cause the results to fall close to this value. Number bias is another source of personal error that varies considerably from person to person. The most frequent number bias encountered in estimating the position of a needle on a scale involves a preference for the digits 0 and 5. Also common is a prejudice favoring small digits over large and even numbers over odd. Again, automated and computerized instruments can eliminate this form of bias.

The Effect of Systematic Errors on Analytical Results

Systematic errors may be either constant or proportional. The magnitude of a constant error stays essentially the same as the size of the quantity measured is varied. With constant errors, the absolute error is constant with sample size, but the relative error varies when the sample size is changed. Proportional errors increase or decrease according to the size of the sample taken for analysis. With proportional errors, the absolute error varies with sample size, but the relative error stays constant when the sample size is changed. The effect of a constant error becomes more serious as the size of the quantity measured decreases.
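A minimal Python sketch (hypothetical numbers) contrasting how constant and proportional errors behave as sample size changes:

    # Sketch contrasting constant and proportional errors (hypothetical numbers).
    # A constant error (e.g., a fixed 0.50 mg solubility loss) hurts small samples
    # most; a proportional error (e.g., 1% interfering contaminant) scales with size.
    constant_error_mg = 0.50        # same absolute loss regardless of sample size
    proportional_fraction = 0.01    # 1% of the analyte, whatever the sample size

    for true_mg in (50.0, 250.0, 500.0):
        rel_constant = 100 * constant_error_mg / true_mg   # % error shrinks as size grows
        rel_proportional = 100 * proportional_fraction     # % error stays at 1%
        print(f"{true_mg:6.1f} mg analyte: constant -> {rel_constant:.2f}%, "
              f"proportional -> {rel_proportional:.2f}%")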


The excess of reagent needed to bring about a color change during a titration is another example of constant error. This volume, usually small, remains the same regardless of the total volume of reagent required for the titration. Again, the relative error from this source becomes more serious as the total volume decreases. One way of reducing the effect of constant error is to increase the sample size until the error is acceptable. A common cause of proportional errors is the presence of interfering contaminants in the sample. Constant errors are independent of the size of the sample being analyzed. Proportional errors decrease or increase in proportion to the size of the sample.

Some systematic instrument errors can be found and corrected by calibration. Periodic calibration of equipment is always desirable because the response of most instruments changes with time as a result of component aging, corrosion, or mistreatment. Most personal errors can be minimized by careful, disciplined laboratory work. It is a good habit to check instrument readings, notebook entries, and calculations systematically. Errors due to limitations of the experimenter can usually be avoided by carefully choosing the analytical method or using an automated procedure.

Use of spreadsheets and calibration curves

In external standard calibration, a series of standard solutions is prepared separately from the sample. The standards are used to establish the instrument calibration function, which is obtained from analysis of the instrument response as a function of the known analyte concentration. Ideally, three or more standard solutions are used in the calibration process, although in some routine determinations, two-point calibrations can be reliable. The calibration function can be obtained graphically or in mathematical form. Generally, a plot of instrument response versus known analyte concentrations is used to produce a calibration curve, sometimes called a working curve. It is often desirable that the calibration curve be linear in at least the range of the analyte concentrations.

A linear calibration curve of absorbance versus analyte concentration is shown in Figure 8-9. For graphical methods, a straight line is drawn through the data points (shown as circles). The linear relationship is then used to predict the concentration of an unknown analyte solution shown here with an absorbance of 0.505. Graphically, this prediction is done by locating the absorbance on the line and then finding the concentration corresponding to that absorbance (0.0044 M). The concentration found is then related back to the analyte concentration in the original sample by applying appropriate dilution factors from the sample preparation steps. Computerized numerical data analysis has largely replaced graphical calibration methods, which are now seldom used except for visual
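The external-standard calibration just described is, in mathematical form, a least-squares straight-line fit. The Python sketch below is illustrative only: the standard concentrations and absorbances are made up (chosen so that an absorbance of 0.505 corresponds to roughly 0.0044 M, matching the text's example), and the fit is computed by hand to avoid external libraries.

    # Sketch of external-standard calibration with hypothetical standards.
    # A straight line A = slope*c + intercept is fit by least squares, then
    # used to predict the concentration of an unknown with A = 0.505.
    concentrations = [0.001, 0.002, 0.003, 0.004, 0.005]   # M, standard solutions
    absorbances    = [0.114, 0.229, 0.344, 0.459, 0.574]   # measured responses

    n = len(concentrations)
    mean_x = sum(concentrations) / n
    mean_y = sum(absorbances) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(concentrations, absorbances))
             / sum((x - mean_x) ** 2 for x in concentrations))
    intercept = mean_y - slope * mean_x

    unknown_absorbance = 0.505
    unknown_conc = (unknown_absorbance - intercept) / slope
    print(f"slope = {slope:.1f}, intercept = {intercept:.4f}, "
          f"unknown conc = {unknown_conc:.4f} M")   # -> about 0.0044 M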