995 results for Human Errors
Abstract:
We construct estimates of educational attainment for a sample of OECD countries using previously unexploited sources. We follow a heuristic approach to obtain plausible time profiles for attainment levels by removing sharp breaks in the data that seem to reflect changes in classification criteria. We then construct indicators of the information content of our series and of a number of previously available data sets, and examine their performance in several growth specifications. We find a clear positive correlation between data quality and the size and significance of human capital coefficients in growth regressions. Using an extension of the classical errors-in-variables model, we construct a set of meta-estimates of the coefficient of years of schooling in an aggregate Cobb-Douglas production function. Our results suggest that, after correcting for measurement error bias, the value of this parameter is well above 0.50.
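For context, the mechanism such meta-estimates exploit is the textbook attenuation result of the classical errors-in-variables model, sketched below; treating the information-content indicators as a proxy for the reliability ratio is an assumption made here for illustration, not the paper's exact extension.

```latex
% Schooling is observed with noise: \tilde{s} = s + u, with u uncorrelated
% with s and with the regression error. OLS on \tilde{s} is then attenuated
% toward zero by the reliability ratio r of the measured series:
\operatorname{plim}\,\hat{\beta}_{\mathrm{OLS}}
  = \beta \,\frac{\sigma_s^2}{\sigma_s^2 + \sigma_u^2}
  = \beta\, r , \qquad 0 < r \le 1 .
% Hence, given an estimate \hat{r} of a data set's reliability,
% the bias-corrected coefficient is \hat{\beta}_{\mathrm{OLS}} / \hat{r}.
```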
Abstract:
Employing an endogenous growth model with human capital, this paper explores how productivity shocks in the goods and human capital producing sectors contribute to explaining aggregate fluctuations in output, consumption, investment and hours. Given the importance of accounting for both the dynamics and the trends in the data not captured by the theoretical growth model, we introduce a vector error correction model (VECM) of the measurement errors and estimate the model’s posterior density function using Bayesian methods. To contextualize our findings with those in the literature, we also assess whether the endogenous growth model or the standard real business cycle model better explains the observed variation in these aggregates. In addressing these issues we contribute to both the methods of analysis and the ongoing debate regarding the effects of innovations to productivity on macroeconomic activity.
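For readers unfamiliar with the VECM component, the generic textbook form is sketched below; the paper's exact specification of the measurement-error dynamics may differ.

```latex
% Vector error correction model for an n-vector y_t of (log) aggregates:
\Delta y_t \;=\; \alpha\,\beta' y_{t-1}
  \;+\; \sum_{i=1}^{k-1} \Gamma_i\, \Delta y_{t-i} \;+\; \varepsilon_t ,
% where \beta' y_{t-1} are the cointegrating (long-run) relations,
% \alpha is the matrix of adjustment loadings, and \varepsilon_t is
% a white-noise innovation.
```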
Abstract:
We briefly review findings from Brazilian settings where the human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) epidemic among injection drug users (IDUs) seems to be decreasing, highlighting recent findings from Rio de Janeiro and discussing methodological alternatives. Previous analyses using the serologic testing algorithm for recent HIV seroconversion have shown that HIV incidence has been low among IDUs recruited by two different surveys carried out in Rio, where low injection frequencies and infection rates have been found among new injectors. The proportion of AIDS cases among IDUs in Rio has been fairly modest compared to São Paulo and especially to the southernmost states. Notwithstanding, the interpretation of findings from serial surveys constitutes a challenge, magnified in the assessment of HIV spread among IDUs by the dynamic nature of the drug scenes and the limitations of sampling strategies targeting hard-to-reach populations. Assessment of epidemic trends may profit from the triangulation of data, but cannot avert biases associated with sampling errors. Efforts should be made to triangulate data from different sources, besides exploring specific studies from different perspectives. In an attempt to further assess the observed trends, we carried out original analyses using data from the Brazilian AIDS databank.
Abstract:
Summary of food stamp errors.
Abstract:
Saffaj et al. recently criticized our method for monitoring carbon dioxide in human postmortem cardiac gas samples using headspace gas chromatography-mass spectrometry. According to the authors, their demonstration, based on the latest SFSTP guidelines (established after 2007 [1,2]) and suited to the validation of bioanalytical drug-monitoring methods, highlighted potential errors. However, our validation approach was built on the SFSTP guidelines established before 2007 [3-6]. We justify the use of these guidelines by the post-mortem (rather than clinical) context of the study and the gaseous (rather than solid or liquid) state of the sample. Using these guidelines, our validation remains correct.
Abstract:
A-8a Summary of Food Stamp Errors Active and Negative Cases
Abstract:
Summary of food stamp errors.
Abstract:
A-8a Summary of Food Stamp Errors Active and Negative Cases, Apr. 2004 - Sept. 2004
Abstract:
A-8a Summary of Food Stamp Errors Active and Negative Cases, Oct. 2004 - Mar. 2005
Abstract:
Summary of food stamp errors.
Abstract:
Summary of food stamp errors.
Abstract:
Summary of food stamp errors.
Abstract:
New Global Positioning System (GPS) receivers now make it possible to measure a location on earth at high frequency (5 Hz) with centimetric precision using the phase-differential positioning method. We studied whether this technique is accurate enough to retrieve basic parameters of human locomotion. Eight subjects walked on an athletics track at four different imposed step frequencies (70-130 steps/min), plus a run at free pace. Differential carrier-phase localization between a fixed base station and a mobile antenna mounted on the walking person was calculated. In parallel, a triaxial accelerometer attached to the low back recorded body accelerations. The different parameters were averaged over 150 consecutive steps of each run for each subject (a total of 6000 steps analyzed). We observed a near-perfect correlation between average step duration measured by accelerometer and by GPS (r=0.9998, N=40). Two parameters important for the calculation of the external work of walking were also analyzed, namely the vertical lift of the trunk and the velocity variation per step. For an average walking speed of 4.0 km/h, average vertical lift and velocity variation were, respectively, 4.8 cm and 0.60 km/h. The average intra-individual step-to-step variability at a constant speed, which includes GPS errors and biological gait-style variation, was found to be 24.5% (coefficient of variation) for vertical lift and 44.5% for velocity variation. It is concluded that the GPS technique can provide useful biomechanical parameters for the analysis of an unlimited number of strides in an unconstrained, free-living environment.
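As a rough illustration of how such per-step parameters could be derived from a 5 Hz vertical-position trace, here is a minimal sketch; the peak-based step segmentation and the function name are assumptions made for illustration, not the authors' processing pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

def step_metrics(z, fs=5.0):
    """Per-step durations, vertical lift, and step-to-step variability (CV)
    from a vertical-position trace z in metres, sampled at fs Hz
    (e.g. 5 Hz differential carrier-phase GPS fixes).
    Assumes one trunk-height peak and one trough per step."""
    peaks, _ = find_peaks(z)       # highest trunk position of each step
    troughs, _ = find_peaks(-z)    # lowest trunk position of each step
    durations = np.diff(peaks) / fs                 # step durations (s)
    n = min(len(peaks), len(troughs))
    lift = np.abs(z[peaks[:n]] - z[troughs[:n]])    # vertical lift per step (m)
    cv = 100.0 * lift.std() / lift.mean()           # coefficient of variation (%)
    return durations, lift, cv
```

A coefficient of variation computed this way mixes GPS error with genuine gait variability, which is why the abstract reports the two sources together.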
Abstract:
The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the activities necessary to keep a cell alive. The DNA, on the other hand, stores the information on how to produce the different proteins in the genome. Regulating gene transcription is the first important step that can thus affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, associated with uncontrolled cellular proliferation, invasiveness of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This thesis presents new computational methodologies to study gene regulation, together with applications of these methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1. C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected by the conservation of the binding sites across mammals and by the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERa) and study the influence of hERa in regulating transcription. Upon hormone estrogen signaling, hERa binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERa, we conduct in silico analysis of the sequences underlying the ChIP sites using the position weight matrices (PWMs) of the hERa partners FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-PET) data about hERa binding on DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples, and also in studies of the response of cells to estrogen. We confirm that hERa binding sites are distributed across the whole genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERa and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes. The second group shows strong binding of hERa and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERa-mediated induction of gene expression. Our work supports the model of hERa activating gene expression from distal binding sites by interacting with promoter-bound TFs, such as SP1. hERa has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete; this result is important to better understand how hERa can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements such as quantifications of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models, where we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must behave coherently, as an activator or as a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model, as sketched below. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method was tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. These bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have thus enabled us to better understand gene regulation in humans.
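A minimal sketch of how such sign constraints can be grafted onto a coordinate-descent lasso follows; the function name and the update scheme are illustrative assumptions, not the implementation developed in the thesis.

```python
import numpy as np

def sign_constrained_lasso(X, y, signs, lam=0.1, n_iter=200):
    """Sparse linear regression y ~ X @ beta by coordinate descent, where
    signs[j] = +1 forces beta[j] >= 0 (TF j acts as an activator on all
    its targets) and signs[j] = -1 forces beta[j] <= 0 (repressor).
    Assumes the columns of X are non-zero."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)          # per-column squared norms
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # plain lasso update: soft-thresholding
            b = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            # project onto the orthant allowed by the sign constraint
            beta[j] = max(b, 0.0) if signs[j] > 0 else min(b, 0.0)
    return beta
```

Because both the L1 penalty and the sign constraints are separable across coordinates, the projected coordinate update solves each one-dimensional subproblem exactly, so the usual coordinate-descent behavior of the lasso is preserved.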
Abstract:
The marketplace of the twenty-first century will demand that manufacturing assume a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990s. In doing so, the focus has been on hard technology, and less attention has been paid to the utilization of human resources. In many manufacturing companies an appreciable portion of the attainable profit is wasted due to poor quality of planning and workmanship. This thesis examines the production flow and the distribution of production errors in sheet-metal-part-based constructions. The objective of the thesis is to analyze the origins of production errors in the production flow of sheet-metal-based constructions. Employee empowerment is also investigated in theory, and its role in reducing the overall number of production errors is discussed. This study is most relevant to the sheet metal part fabricating industry, which produces sheet-metal-part-based constructions for the electronics and telecommunications industries. The study concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each case factory studied, the work phases most susceptible to production errors were identified. It can be assumed that most production errors arise in manually operated work phases and in mass-production work phases. However, no common pattern of production error distribution across the production flow could be found in the collected data. The most important finding, nevertheless, was that in each case factory studied most of the production errors belong to the 'human activity based errors' category. This result indicates that most of the problems in the production flow are related to employees or to the work organization. Development activities must therefore be focused on developing employee skills or the work organization. Employee empowerment provides the right tools and methods to achieve this.