Monday 26 November 2012

Types of Sampling


Statistics: Introduction

 

Population vs Sample
The population includes all objects of interest whereas the sample is only a portion of the population. Parameters are associated with populations and statistics with samples. Parameters are usually denoted using Greek letters (mu, sigma) while statistics are usually denoted using Roman letters (x, s).
There are several reasons why we don't work with populations. They are usually large, and it is often impossible to get data for every object we're studying. Sampling does not usually occur without cost, and the more items surveyed, the larger the cost.
We compute statistics, and use them to estimate parameters. The computation is the first part of the statistics course (Descriptive Statistics) and the estimation is the second part (Inferential Statistics).
Discrete vs Continuous
Discrete variables are usually obtained by counting. There are a finite or countable number of choices available with discrete data. You can't have 2.63 people in the room.
Continuous variables are usually obtained by measuring. Length, weight, and time are all examples of continuous variables. Since continuous variables are real numbers, we usually round them. This implies a boundary depending on the number of decimal places. For example: 64 is really anything 63.5 <= x < 64.5. Likewise, if there are two decimal places, then 64.03 is really anything 64.025 <= x < 64.035. Boundaries always have one more decimal place than the data and end in a 5.
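The boundary rule for rounded continuous data can be sketched in a few lines of Python (an illustrative sketch, not part of the original notes): a value recorded to d decimal places represents anything within half a unit of the last recorded digit.

```python
def boundaries(value, decimals):
    """Return the (lower, upper) real-number boundary of a rounded value."""
    half = 0.5 * 10 ** (-decimals)
    return value - half, value + half

print(boundaries(64, 0))      # approximately (63.5, 64.5)
print(boundaries(64.03, 2))   # approximately (64.025, 64.035)
```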
Levels of Measurement
There are four levels of measurement: Nominal, Ordinal, Interval, and Ratio. These go from lowest level to highest level. Data is classified according to the highest level which it fits. Each additional level adds something the previous level didn't have.
  • Nominal is the lowest level. Only names are meaningful here.
  • Ordinal adds an order to the names.
  • Interval adds meaningful differences
  • Ratio adds a zero so that ratios are meaningful.

Sampling Risks

There are two types of sampling risk: the risk of incorrectly accepting the research hypothesis and the risk of incorrectly rejecting it. These risks reflect the possibility that a test conducted on a sample may produce results and conclusions different from those that would be obtained if the test were conducted on the entire population.
The risk of incorrect acceptance is the risk that the sample yields a conclusion supporting a theory about the population when that theory does not actually hold in the population. The risk of incorrect rejection, on the other hand, is the risk that the sample yields a conclusion rejecting a theory about the population when the theory in fact holds true.
Comparing the two, researchers fear the risk of incorrect rejection more than the risk of incorrect acceptance. Consider this example: an experimental drug was tested for debilitating side effects. Under incorrect acceptance, the researcher concludes that the drug has negative side effects when in truth it does not, and the entire population abstains from a useful drug. Under incorrect rejection, the researcher concludes that the drug has no negative side effects; the entire population then takes the drug believing it is safe, and everyone suffers the consequences of the researcher's mistake.


Types of Sampling
There are five types of sampling: Random, Systematic, Convenience, Cluster, and Stratified.
    • Random sampling is analogous to putting everyone's name into a hat and drawing out several names. Each element in the population has an equal chance of occurring. While this is the preferred way of sampling, it is often difficult to do. It requires that a complete list of every element in the population be obtained. Computer-generated lists are often used with random sampling. You can generate random numbers using the TI-82 calculator.
    • Systematic sampling is easier to do than random sampling. In systematic sampling, the list of elements is "counted off". That is, every kth element is taken. This is similar to lining everyone up and numbering off "1,2,3,4; 1,2,3,4; etc". When done numbering, all people numbered 4 would be used.
    • Convenience sampling is very easy to do, but it's probably the worst technique to use. In convenience sampling, readily available data is used. That is, the first people the surveyor runs into.
    • Cluster sampling is accomplished by dividing the population into groups -- usually geographically. These groups are called clusters or blocks. The clusters are randomly selected, and each element in the selected clusters are used.
    • Stratified sampling also divides the population into groups called strata. However, this time it is by some characteristic, not geographically. For instance, the population might be separated into males and females. A sample is taken from each of these strata using either random, systematic, or convenience sampling.
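As a sketch of the first two techniques, Python's random module can stand in for the TI-82 mentioned above. The population of 100 numbered elements is hypothetical.

```python
import random

population = list(range(1, 101))  # hypothetical population: IDs 1..100

# Random sampling: every element has an equal chance of selection.
random_sample = random.sample(population, k=10)

# Systematic sampling: take every k-th element after a random start.
k = len(population) // 10
start = random.randrange(k)
systematic_sample = population[start::k]

print(sorted(random_sample))
print(systematic_sample)
```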



CHI SQUARE


Types of Data:

There are basically two types of random variables and they yield two types of data: numerical and categorical. A chi square (χ²) statistic is used to investigate whether distributions of categorical variables differ from one another. Basically, categorical variables yield data in categories and numerical variables yield data in numerical form. Responses to such questions as "What is your major?" or "Do you own a car?" are categorical because they yield data such as "biology" or "no." In contrast, responses to such questions as "How tall are you?" or "What is your G.P.A.?" are numerical. Numerical data can be either discrete or continuous. The table below may help you see the differences between these two variables.
Data Type               Question Type                Possible Responses
Categorical             What is your sex?            male or female
Numerical (Discrete)    How many cars do you own?    two or three
Numerical (Continuous)  How tall are you?            72 inches

Notice that discrete data arise from a counting process, while continuous data arise from a measuring process.
The Chi Square statistic compares the tallies or counts of categorical responses between two (or more) independent groups. (note: Chi square tests can only be used on actual numbers and not on percentages, proportions, means, etc.)

2 x 2 Contingency Table

There are several types of chi square tests depending on the way the data was collected and the hypothesis being tested. We'll begin with the simplest case: a 2 x 2 contingency table. If we set the 2 x 2 table to the general notation shown below in Table 1, using the letters a, b, c, and d to denote the contents of the cells, then we would have the following table:
Table 1. General notation for a 2 x 2 contingency table.

                Variable 2
Variable 1      Data type 1   Data type 2   Totals
Category 1      a             b             a + b
Category 2      c             d             c + d
Total           a + c         b + d         a + b + c + d = N
For a 2 x 2 contingency table the Chi Square statistic is calculated by the formula:

Chi square = N(ad - bc)² / [(a + b)(c + d)(a + c)(b + d)]

Note: notice that the four components of the denominator are the four totals from the table columns and rows.
Suppose you conducted a drug trial on a group of animals and you hypothesized that the animals receiving the drug would show increased heart rates compared to those that did not receive the drug. You conduct the study and collect the following data:
Ho: The proportion of animals whose heart rate increased is independent of drug treatment.
Ha: The proportion of animals whose heart rate increased is associated with drug treatment.

Table 2. Hypothetical drug trial results.

               Heart Rate Increased   No Heart Rate Increase   Total
Treated        36                     14                       50
Not treated    30                     25                       55
Total          66                     39                       105
Applying the formula above we get:
Chi square = 105[(36)(25) - (14)(30)]² / (50)(55)(39)(66) = 3.418
Before we can proceed we need to know how many degrees of freedom we have. When a comparison is made between one sample and another, a simple rule is that the degrees of freedom equal (number of columns minus one) x (number of rows minus one) not counting the totals for rows or columns. For our data this gives (2-1) x (2-1) = 1.
We now have our chi square statistic (χ² = 3.418), our predetermined alpha level of significance (0.05), and our degrees of freedom (df = 1). Entering the Chi square distribution table with 1 degree of freedom and reading along the row we find our value of χ² (3.418) lies between 2.706 and 3.841. The corresponding probability is between the 0.10 and 0.05 probability levels. That means that the p-value is above 0.05 (it is actually 0.065). Since a p-value of 0.065 is greater than the conventionally accepted significance level of 0.05 (i.e. p > 0.05) we fail to reject the null hypothesis. In other words, there is no statistically significant difference in the proportion of animals whose heart rate increased.
What would happen if the number of control animals whose heart rate increased dropped to 29 instead of 30 and, consequently, the number of controls whose heart rate did not increase changed from 25 to 26? Try it. Notice that the new χ² value is 4.125 and this value exceeds the table value of 3.841 (at 1 degree of freedom and an alpha level of 0.05). This means that p < 0.05 (it is now 0.04) and we reject the null hypothesis in favor of the alternative hypothesis: the heart rate of animals is different between the treatment groups. When p < 0.05 we generally refer to this as a significant difference.
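The arithmetic in both cases can be checked with a short script. This is a sketch of the shortcut formula for a 2 x 2 table, with the cell values taken from Table 2.

```python
def chi_square_2x2(a, b, c, d):
    """Chi square for a 2x2 table: N(ad - bc)^2 / [(a+b)(c+d)(a+c)(b+d)]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Original data: treated 36 increased / 14 not, controls 30 / 25
print(round(chi_square_2x2(36, 14, 30, 25), 3))  # 3.418

# Modified data: controls 29 increased / 26 not
print(round(chi_square_2x2(36, 14, 29, 26), 3))  # 4.125
```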
Chi-square is a statistical test commonly used to compare observed data with data we would expect to obtain according to a specific hypothesis. For example, if, according to Mendel's laws, you expected 10 of 20 offspring from a cross to be male and the actual observed number was 8 males, then you might want to know about the "goodness of fit" between the observed and expected. Were the deviations (differences between observed and expected) the result of chance, or were they due to other factors? How much deviation can occur before you, the investigator, must conclude that something other than chance is at work, causing the observed to differ from the expected? The chi-square test is always testing what scientists call the null hypothesis, which states that there is no significant difference between the expected and observed result.

The formula for calculating chi-square (χ²) is:

χ² = Σ (o - e)² / e

That is, chi-square is the sum of the squared differences between observed (o) and expected (e) data (or the deviation, d), divided by the expected data in all possible categories.

For example, suppose that a cross between two pea plants yields a population of 880 plants, 639 with green seeds and 241 with yellow seeds. You are asked to propose the genotypes of the parents. Your hypothesis is that the allele for green is dominant to the allele for yellow and that the parent plants were both heterozygous for this trait. If your hypothesis is true, then the predicted ratio of offspring from this cross would be 3:1 (based on Mendel's laws) as predicted from the results of the Punnett square (Figure B. 1).


Figure B.1 - Punnett Square. Predicted offspring from cross between green and yellow-seeded plants. Green (G) is dominant (3/4 green; 1/4 yellow).
To calculate χ², first determine the number expected in each category. If the ratio is 3:1 and the total number of observed individuals is 880, then the expected numerical values should be 660 green and 220 yellow.


Chi-square requires that you use numerical values, not percentages or ratios.


Then calculate χ² using this formula, as shown in Table B.1. Note that we get a value of 2.668 for χ². But what does this number mean? Here's how to interpret the χ² value:

1. Determine degrees of freedom (df). Degrees of freedom can be calculated as the number of categories in the problem minus 1. In our example, there are two categories (green and yellow); therefore, there is 1 degree of freedom.
2. Determine a relative standard to serve as the basis for accepting or rejecting the hypothesis. The relative standard commonly used in biological research is p = 0.05. The p value is the probability that a deviation as large as the one observed would arise by chance alone (no other forces acting). Using this standard, a deviation is attributed to chance when p > 0.05.
3. Refer to a chi-square distribution table (Table B.2). Using the appropriate degrees of freedom, locate the value closest to your calculated chi-square in the table. Determine the closest p (probability) value associated with your chi-square and degrees of freedom. In this case (χ² = 2.668), the p value is about 0.10, which means that there is a 10% probability that any deviation from expected results is due to chance only. Based on our standard p > 0.05, this is within the range of acceptable deviation. In terms of your hypothesis for this example, the observed chi-square is not significantly different from expected. The observed numbers are consistent with those expected under Mendel's law.
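A minimal sketch of this goodness-of-fit calculation, using the observed pea counts from the example and expected counts from the 3:1 Mendelian ratio. (The unrounded sum is about 2.67; the text's 2.668 comes from rounding each term before adding.)

```python
def chi_square_gof(observed, expected):
    """Goodness-of-fit chi square: sum of (o - e)^2 / e over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [639, 241]                       # green, yellow
total = sum(observed)                       # 880 plants
expected = [total * 3 / 4, total * 1 / 4]   # 3:1 ratio -> [660, 220]

chi2 = chi_square_gof(observed, expected)
print(round(chi2, 2))  # approximately 2.67
```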
Step-by-Step Procedure for Testing Your Hypothesis and Calculating Chi-Square
1. State the hypothesis being tested and the predicted results. Gather the data by conducting the proper experiment (or, if working genetics problems, use the data provided in the problem).
2. Determine the expected numbers for each observational class. Remember to use numbers, not percentages.


Chi-square should not be calculated if the expected value in any category is less than 5.

3. Calculate χ² using the formula. Complete all calculations to three significant digits. Round off your answer to two significant digits.
4. Use the chi-square distribution table to determine significance of the value.
  1. Determine degrees of freedom and locate the value in the appropriate column.
  2. Locate the value closest to your calculated χ² on that degrees of freedom (df) row.
  3. Move up the column to determine the p value.
5. State your conclusion in terms of your hypothesis.
  1. If the p value for the calculated χ² is p > 0.05, accept your hypothesis. The deviation is small enough that chance alone accounts for it. A p value of 0.6, for example, means that there is a 60% probability that any deviation from expected is due to chance only. This is within the range of acceptable deviation.
  2. If the p value for the calculated χ² is p < 0.05, reject your hypothesis, and conclude that some factor other than chance is operating for the deviation to be so great. For example, a p value of 0.01 means that there is only a 1% chance that this deviation is due to chance alone. Therefore, other factors must be involved.
The chi-square test will be used to test for the "goodness of fit" between observed and expected data from several laboratory investigations in this lab manual.

Table B.1
Calculating Chi-Square

                    Green    Yellow
Observed (o)        639      241
Expected (e)        660      220
Deviation (o - e)   -21      21
Deviation² (d²)     441      441
d²/e                0.668    2.00

χ² = Σ d²/e = 2.668


Table B.2
Chi-Square Distribution

Degrees of                             Probability (p)
Freedom (df)  0.95   0.90   0.80   0.70   0.50   0.30   0.20   0.10   0.05   0.01   0.001
1             0.004  0.02   0.06   0.15   0.46   1.07   1.64   2.71   3.84   6.64   10.83
2             0.10   0.21   0.45   0.71   1.39   2.41   3.22   4.60   5.99   9.21   13.82
3             0.35   0.58   1.01   1.42   2.37   3.66   4.64   6.25   7.82   11.34  16.27
4             0.71   1.06   1.65   2.20   3.36   4.88   5.99   7.78   9.49   13.28  18.47
5             1.14   1.61   2.34   3.00   4.35   6.06   7.29   9.24   11.07  15.09  20.52
6             1.63   2.20   3.07   3.83   5.35   7.23   8.56   10.64  12.59  16.81  22.46
7             2.17   2.83   3.82   4.67   6.35   8.38   9.80   12.02  14.07  18.48  24.32
8             2.73   3.49   4.59   5.53   7.34   9.52   11.03  13.36  15.51  20.09  26.12
9             3.32   4.17   5.38   6.39   8.34   10.66  12.24  14.68  16.92  21.67  27.88
10            3.94   4.86   6.18   7.27   9.34   11.78  13.44  15.99  18.31  23.21  29.59

Values to the left of the 0.05 column are nonsignificant; values at or beyond it are significant.
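A minimal sketch of reading the 0.05 column of this table programmatically. The critical values are copied from the rows above, and the two χ² values come from the drug-trial example earlier.

```python
# Critical chi-square values at alpha = 0.05, copied from Table B.2.
CRITICAL_05 = {1: 3.84, 2: 5.99, 3: 7.82, 4: 9.49, 5: 11.07}

def is_significant(chi2, df):
    """True when chi2 exceeds the 0.05 critical value for df degrees of freedom."""
    return chi2 > CRITICAL_05[df]

print(is_significant(3.418, 1))  # False: fail to reject the null hypothesis
print(is_significant(4.125, 1))  # True: reject the null hypothesis
```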

Friday 9 November 2012

Cost Of Capital



The required return necessary to make a capital budgeting project, such as building a new factory, worthwhile. Cost of capital includes the cost of debt and the cost of equity.
The cost of capital determines how a company can raise money (through a stock issue, borrowing, or a mix of the two). This is the rate of return that a firm would receive if it invested in a different vehicle with similar risk. Cost of capital is the minimum rate of return which a company has to earn to obtain funding.
When an investor puts his money into a company, he looks at the rate of return. So the company has to state what it will pay if investors provide their money. That average cost on the investment is called the cost of capital. We calculate it in the following way:

Cost of capital = interest rate at zero risk (risk-free rate) + premium for business risk + premium for financial risk


If a company does not have the power to earn its cost of capital, then it cannot raise funds from the public.
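A trivial sketch of the additive formula above; the 5%, 3%, and 2% rates are illustrative assumptions, not figures from the post.

```python
def cost_of_capital(risk_free, business_premium, financial_premium):
    """Additive cost of capital: risk-free rate plus the two risk premiums."""
    return risk_free + business_premium + financial_premium

# Hypothetical rates: 5% risk-free + 3% business risk + 2% financial risk
k = cost_of_capital(0.05, 0.03, 0.02)
print(f"{k:.0%}")  # 10%
```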

Importance of cost of capital


1. Basis of capital budgeting decisions:-


A company that wants to invest its money in different projects will compare each project's expected return with its cost of capital; it will never invest in a project whose expected return is less than the cost of capital.


2. Basis of redesigning of capital structure :-


Cost of capital affects capital structure design. Capital structure is simply the mix of debt and equity sources which the company wants to obtain from investors. The company selects the mixture of debt and equity for which the cost of capital will be minimum. With this the company can increase the value of its shares.


3. Basis of other decisions:-
There are a large number of decisions, like dividend policy and interest policy, which depend on a correct calculation of the cost of capital.

Factors affecting the choice of source of finance
1. Time factor
2. Cost of finance
3. Size of the business
4. Availability of finance
5. Flexibility of the firm to use other sources
6. Mode of repayment
   
 

Thursday 1 November 2012

Who is sleeping on the job?



Dump site in the middle of town
Narok town is the main gateway to one of the seven wonders of the world; further, it is on the busy Kisumu-Nairobi road. It is a town where every hard-working business man and woman would wish to reside and burn the midnight oil in the art of making money. Middlemen on the lookout for animals (cows, sheep and goats) on sale camp in this town, and those wishing to engage in the wheat and maize business are here too. The Municipal Council of Narok is among the richest, collecting up to Sh2 million per month when all is well, but the safety of the town's inhabitants has been forgotten. The greater town has no street lights, it is dirty, and raw sewage flows across the town. Sample pictures taken at different times and places reveal that nothing has been done to improve the condition. Further, the town is usually flooded during heavy rains, and business people incur heavy losses while trying to come to terms with floods, since the municipality's measures to curb the situation are often late or inadequate.
The big question is: has NEMA made any progress on the issue raised by the minister long ago, or who is sleeping on the job? Is it NEMA officials, the municipality, or everybody, especially those who don't raise relevant questions as and when needed?
Raw sewage flowing in the middle of town


Environment Minister Chirau Mwakwere has expressed concern over pollution of Narok River.
Dirty linen in the river
Mwakwere also took issue with a dumpsite emitting solid and liquid waste into the river, endangering the lives of people and animals living downstream. “Many people in far-flung areas depend on this river. Pollution endangers the lives of those who use its water for domestic consumption,” said Mwakwere.
He asked Narok Town Council to relocate the dumpsite as a short-term measure. The minister also said plastic and polythene disposal was worrying and challenged Kenyans to ensure the environment is clean.
Another raw sewage flow
Mwakwere, who was accompanied by National Environment Management Authority Director Geoffrey Wakhungu, was speaking in Narok yesterday during an inspection of the dumpsite.
Conservation of the environment
“Change of land use is accelerating desertification. Residents should plant trees to conserve the environment,” he added.
He said his ministry would assist Narok Town Council to tackle perennial flooding that has in the past claimed lives and property.
The minister added there was need to incorporate other partners in conservation of the environment. Residents asked the minister to press the Government to evict the remaining Mau Forest settlers, lamenting that their continued stay poses a threat to survival of the eco-system.