915 results for Secondary Data Analysis
Abstract:
Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, the major part of the theory and applications connected with reliability analysis has been discussed in terms of the distribution function. In the beginning chapters of the thesis, we have described some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up to the systematic study in this connection by Nair and Sankaran (2009), in the present work we have tried to extend their ideas to develop the necessary theoretical framework for lifetime data analysis. In Chapter 1, we have given the relevance and scope of the study and a brief outline of the work we have carried out. Chapter 2 of this thesis is devoted to the presentation of various concepts and brief reviews of them, which are useful for the discussions in the subsequent chapters. In the introduction of Chapter 4, we have pointed out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6, we have studied the first two L-moments of residual life and their relevance in various applications of reliability analysis. We have shown that the first L-moment of the residual life function is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7, we have defined the percentile residual life in reversed time (RPRL) and derived its relationship with the reversed hazard rate (RHR). We have discussed the characterization problem of the RPRL and demonstrated with an example that the RPRL at a fixed percentile does not determine the distribution uniquely.
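A minimal numerical sketch of two of the quantile-based measures mentioned above, using the exponential distribution as a stand-in lifetime model; the distribution, rate and step count are illustrative choices of ours, not taken from the thesis:

```python
import numpy as np

# Exponential lifetime model as a stand-in: Q(u) = -ln(1 - u) / lam
lam = 2.0
Q = lambda u: -np.log1p(-u) / lam          # quantile function
q = lambda u: 1.0 / (lam * (1.0 - u))      # quantile density q(u) = Q'(u)

# Hazard quantile function: H(u) = 1 / ((1 - u) * q(u))
H = lambda u: 1.0 / ((1.0 - u) * q(u))

# Vitality function in quantile form:
#   V(u) = E[X | X > Q(u)] = (1 / (1 - u)) * integral_u^1 Q(p) dp
def vitality(u, n=200_000):
    p = np.linspace(u, 1.0 - 1e-9, n)
    Qp = Q(p)
    integral = np.sum(0.5 * (Qp[1:] + Qp[:-1]) * np.diff(p))  # trapezoid rule
    return integral / (1.0 - u)

u = 0.5
print(H(u))         # constant hazard rate lam = 2.0 for the exponential
print(vitality(u))  # memorylessness: equals Q(u) + 1/lam
```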
Abstract:
Atmospheric surface boundary layer parameters varied anomalously in response to the annular solar eclipse of 15 January 2010 over Cochin. It was the longest and most intense annular solar eclipse to have occurred over South India. As it occurred during the noon hours, it is considered much more significant because of its effects on all regions of the atmosphere, including the ionosphere. Since insolation is the main driving factor behind the anomalous changes in the surface layer during an eclipse, the coincidence of this eclipse with noon makes it particularly valuable for understanding the dynamics of the atmosphere during the eclipse period. The sonic anemometer provides the zonal, meridional and vertical wind components as well as the air temperature at a temporal resolution of 1 s. Different surface boundary layer parameters and turbulent fluxes were computed by applying the eddy correlation technique to the high-resolution station data. The surface boundary layer parameters computed from the sonic anemometer data during the period are the momentum flux, sensible heat flux, turbulent kinetic energy, friction velocity (u*), variance of temperature, and variances of the u, v and w wind components. In order to compare the results, a control run was carried out using the data of the previous day as well as the next day. It is noted that over the time period of the annular solar eclipse, all the above surface boundary layer parameters varied anomalously when compared with the control run. From the observations we could note that the momentum flux during the eclipse was 0.1 N m⁻² instead of the mean value of 0.2 N m⁻². The sensible heat flux anomalously decreased to 50 W m⁻² from the mean value of 200 W m⁻² at the time of the solar eclipse. The turbulent kinetic energy decreased to 0.2 m² s⁻² from the mean value of 1 m² s⁻². The friction velocity decreased to 0.05 m s⁻¹ from the mean value of 0.2 m s⁻¹. The present study aimed at understanding the dynamics of the surface layer in response to an annular solar eclipse occurring during the noon hours over a tropical coastal station. Key words: annular solar eclipse, surface boundary layer, sonic anemometer
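A compact sketch of the eddy-correlation computation described above, assuming equal-length series of the three wind components and sonic temperature; the function name, the constant air density, and the block-mean Reynolds decomposition are simplifying assumptions of ours, not the study's code:

```python
import numpy as np

RHO = 1.2     # air density [kg m-3], assumed constant
CP  = 1005.0  # specific heat of dry air [J kg-1 K-1]

def surface_layer_params(u, v, w, T):
    """Eddy-correlation estimates from sonic anemometer series.
    u, v, w: wind components [m s-1]; T: air temperature [K]."""
    # Reynolds decomposition: fluctuation = sample - block mean
    up, vp, wp, Tp = (x - x.mean() for x in (u, v, w, T))
    uw, vw, wT = (up * wp).mean(), (vp * wp).mean(), (wp * Tp).mean()
    return {
        "momentum_flux":      -RHO * uw,                    # [N m-2]
        "sensible_heat_flux":  RHO * CP * wT,               # [W m-2]
        "tke": 0.5 * (up.var() + vp.var() + wp.var()),      # [m2 s-2]
        "u_star": (uw**2 + vw**2) ** 0.25,                  # [m s-1]
        "T_variance": Tp.var(),                             # [K2]
    }
```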
Abstract:
Several eco-toxicological studies have shown that insectivorous mammals, due to their feeding habits, easily accumulate high amounts of pollutants relative to other mammal species. To assess the bio-accumulation levels of toxic metals and their influence on essential metals, we quantified the concentration of 19 elements (Ca, K, Fe, B, P, S, Na, Al, Zn, Ba, Rb, Sr, Cu, Mn, Hg, Cd, Mo, Cr and Pb) in bones of 105 greater white-toothed shrews (Crocidura russula) from a polluted (Ebro Delta) and a control (Medas Islands) area. Since the chemical contents of a bio-indicator are mainly compositional data, the conventional statistical analyses currently used in eco-toxicology can give misleading results. Therefore, to improve the interpretation of the data obtained, we used statistical techniques for compositional data analysis to define groups of metals and to evaluate the relationships between them from an inter-population viewpoint. Hypothesis testing on adequate balance-coordinates allows us to confirm intuition-based hypotheses and some previous results. The main statistical goal was to test the equality of means of balance-coordinates for the two defined populations. After checking normality, one-way ANOVA or Mann-Whitney tests were carried out for the inter-group balances.
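A minimal sketch of the kind of balance-coordinate comparison described above; the column indices and the toxic/essential grouping below are placeholders, not the study's actual partition:

```python
import numpy as np
from scipy import stats

def balance(X, num_idx, den_idx):
    """Isometric log-ratio balance between two groups of parts.
    X: (n_samples, n_parts) strictly positive compositions.
    b = sqrt(r*s/(r+s)) * ln(gmean(numerator) / gmean(denominator))."""
    r, s = len(num_idx), len(den_idx)
    g_num = np.log(X[:, num_idx]).mean(axis=1)   # log of geometric mean
    g_den = np.log(X[:, den_idx]).mean(axis=1)
    return np.sqrt(r * s / (r + s)) * (g_num - g_den)

# Hypothetical use: toxic metals (e.g. Hg, Cd, Pb) against essential ones
# (e.g. Ca, Fe, Zn); indices depend on the column order of the data matrix.
# b_polluted = balance(X_ebro,  [14, 15, 18], [0, 2, 8])
# b_control  = balance(X_medas, [14, 15, 18], [0, 2, 8])
# After a normality check, compare the two populations:
# stats.f_oneway(b_polluted, b_control)       # one-way ANOVA, or
# stats.mannwhitneyu(b_polluted, b_control)   # Mann-Whitney if non-normal
```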
Abstract:
Factor analysis, a frequent technique for multivariate data inspection, is also widely used for compositional data analysis. The usual way is to use a centered logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then

y = Λf + e (1)

with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Under the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as

Cov(y) = ΛΛᵀ + ψ (2)

where ψ = Cov(e) has diagonal form. The diagonal elements of ψ, as well as the loadings matrix Λ, are estimated from an estimate of Cov(y). Let the observed clr-transformed data Y be realizations of the random vector y. Outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y leads to robust estimates of Λ and ψ in (2); see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr-transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr-transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by C(Y) = V C(Z) Vᵀ, where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in model (2) can be estimated (Basilevsky, 1994) and the results have a direct interpretation, since the links to the original variables are still preserved. The above procedure will be applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
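A sketch of this robust pipeline under stated assumptions: the basis construction is one standard choice of ilr basis, and the commented loading step is a plain principal-factor extraction, not necessarily the estimator used by the authors:

```python
import numpy as np
from sklearn.covariance import MinCovDet

def ilr_basis(D):
    """Orthonormal basis V (D x (D-1)) linking clr and ilr: clr = ilr @ V.T."""
    V = np.zeros((D, D - 1))
    for j in range(D - 1):
        V[: j + 1, j] = 1.0 / (j + 1)
        V[j + 1, j] = -1.0
        V[:, j] *= np.sqrt((j + 1) / (j + 2.0))  # normalize the column
    return V

def robust_clr_covariance(X):
    """X: (n, D) strictly positive compositions; returns robust C(Y) in clr space."""
    logX = np.log(X)
    clr = logX - logX.mean(axis=1, keepdims=True)  # rank-deficient (rank D-1)
    V = ilr_basis(X.shape[1])
    Z = clr @ V                                    # full-rank ilr coordinates
    C_Z = MinCovDet().fit(Z).covariance_           # robust MCD estimate
    return V @ C_Z @ V.T                           # C(Y) = V C(Z) V^T

# Principal-factor style loadings for k factors from the robust covariance:
# w, U = np.linalg.eigh(C_Y)
# Lambda = U[:, -k:] * np.sqrt(w[-k:])
```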
Abstract:
A presentation on the collection and analysis of data from SOES 6018. This module aims to ensure that MSc Oceanography, MSc Marine Science, Policy & Law and MSc Marine Resource Management students are equipped with the skills they need to function as professional marine scientists, in addition to, and in conjunction with, the skills training in other MSc modules. The module covers training in fieldwork techniques, communication and research skills, IT and data analysis, and professional development.
Abstract:
A class exercise to analyse qualitative data, based on a set of transcripts augmented by videos from the web site. Discussion covers not only how the data are coded, but also interview bias and the dimensions of analysis. Designed as an introduction.
Abstract:
Social networks have gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and Web 2.0 technologies has become more affordable. People are becoming more interested in and reliant on social networks for information, news and the opinions of other users on diverse subject matters. The heavy reliance on social network sites causes them to generate massive data characterised by three computational issues, namely size, noise and dynamism. These issues often make social network data very complex to analyse manually, resulting in the pertinent use of computational means for analysing them. Data mining provides a wide range of techniques for detecting useful knowledge from massive datasets, such as trends, patterns and rules [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning. These techniques employ data pre-processing, data analysis, and data interpretation processes in the course of data analysis. This survey discusses the different data mining techniques used in mining diverse aspects of social networks over the decades, going from historical techniques to up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1, together with the tools employed and the names of their authors.
Abstract:
In this paper a new parametric method to deal with discrepant experimental results is developed. The method is based on the fit of a probability density function to the data. This paper also compares the characteristics of different methods used to deduce recommended values and uncertainties from a discrepant set of experimental data. The methods are applied to the published ¹³⁷Cs and ⁹⁰Sr half-lives, and special emphasis is given to the deduced confidence intervals. The results obtained are analyzed considering two fundamental properties expected from an experimental result: the probability content of confidence intervals and the statistical consistency between different recommended values. The recommended values and uncertainties for the ¹³⁷Cs and ⁹⁰Sr half-lives are 10,984 (24) days and 10,523 (70) days, respectively.
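For flavour, a minimal sketch of combining discrepant results: the measurements below are made up, and the extra-scatter likelihood is a common approach in this spirit, not necessarily the paper's exact estimator:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical discrepant half-life results (value, quoted sigma), in days.
x = np.array([10990.0, 10940.0, 11020.0, 10970.0])
s = np.array([10.0, 25.0, 15.0, 30.0])

w = 1.0 / s**2
mu_w = (w * x).sum() / w.sum()                 # variance-weighted mean
sig_w = 1.0 / np.sqrt(w.sum())                 # its internal uncertainty
birge = np.sqrt((((x - mu_w) / s) ** 2).sum() / (len(x) - 1))
# birge > 1 signals discrepancy; one classical remedy inflates sig_w by it.

# A density-fit flavour: maximize the likelihood of a normal model with an
# extra scatter tau added in quadrature to each quoted uncertainty.
def neg_log_like(tau):
    var = s**2 + tau**2
    mu = (x / var).sum() / (1.0 / var).sum()   # profiled mean
    return 0.5 * (np.log(var) + (x - mu) ** 2 / var).sum()

tau = minimize_scalar(neg_log_like, bounds=(0.0, 10 * s.max()),
                      method="bounded").x
```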
Abstract:
This paper examines the effects of the establishment of Ikea stores in Kalmar and Karlstad on trade and retail within the two cities, as well as on trade and retail in the nearby neighboring municipalities and in more peripheral municipalities in both regions. After the establishment of the Ikea stores, Kalmar and Karlstad experienced significant growth in trade and retail. The questions, however, are: How big is this growth in the two cities? How have locations at different distances from Ikea been affected? What impact was there on different segments of retail? How have different business branches been affected? How large is the catchment area of the emerging new large-scale retail locations? These questions, in addition to a few others, are investigated in this paper. The thesis starts with an introductory chapter containing the background of the topic, the problem description, the investigated questions, the purpose, and the outline of the paper. The next chapter contains the frame of reference, which consists of a literature review and a theoretical framework on external shopping centers and their impact on retail and regional trade development. It also includes information gathered from previous studies, technical reports and other available sources on the subject. The third chapter describes the methods used to collect the primary and secondary data needed for the purpose of this study. Then comes the empirical framework, which presents the results of the research, followed by the analysis and concluded by the discussion and conclusion. Mixed methods are used as the research strategy in this thesis: the research is based on telephone interviews for the primary (qualitative) data, and on documents and desk research for the secondary (quantitative) data. The gathered data are analyzed and organized in a way that allows the use of a comparative analysis technique to present the findings and draw conclusions. The results showed that a newly established Ikea retail store outside the city boundaries has many effects on the city center as well as on the neighboring municipalities. The city center seems not to be affected negatively; on the contrary, positive effects were witnessed in both regions. These positive effects are linked to the increased inflow of customers from the external retail area, which is known as the spillover effect. On the other hand, the neighboring towns and municipalities are more negatively affected, especially in the trade of non-convenience goods, as consumers in these towns and municipalities start going to the area of Ikea and the large external retail center to do their purchasing; the substitution effect is then said to have occurred. Moreover, the more distant municipalities do not seem to be significantly affected by the establishment of Ikea. These effects, whether positive or negative, can be monitored by looking at a few trade parameters, such as turnover, the sales index, and consumer expenditure; these parameters can be very useful for measuring the developments and changes in the trade and retail of a given place.
Abstract:
The term “social entrepreneurship” has been attracting growing interest from different sectors in the past years, driven by the possibility of employing business techniques to tackle recurrent social and environmental issues. At the forefront of this global phenomenon is microcredit, seen by many as an effective anti-poverty tool and having the Grameen Bank as its flagship program. While the prospects of social entrepreneurship seem promising, the newness of the concept and its somewhat confusing definition make it difficult to analyze this contemporary phenomenon. Therefore, the objective of this study was to discuss the challenges faced by social entrepreneurs and the development alternatives for social businesses through a case study of a Brazilian microcredit institution and inclusive business, Banco Pérola. The case addresses a growing need for case studies designed for teaching in the field of social entrepreneurship. It focused mainly on understanding the development challenges within Banco Pérola, and was built on interviews carried out with the top management, a credit officer and clients of the institution, as well as on the secondary data collected. An analysis of the case study is provided in the accompanying Teaching Notes. As illustrated by the Banco Pérola case, the main difficulties encountered by social entrepreneurs relate to the systematization of processes and the creation of operational routines, including for performance evaluation (impact assessment tools); to the capture and management of both financial and human capital; to scaling up the business model; and to the need to forge closer and more personal relationships with customers than in traditional banking practices. In spite of certain limitations, such as the fact that the case might soon become outdated due to the fast-changing environment surrounding Banco Pérola, or the fact that not all relevant stakeholders (e.g. partners) were selected for interviews, the research objective was achieved, and the study can be seen as a contribution to spreading the concept of social entrepreneurship.
Abstract:
The study presents the results and recommendations deriving from the application of two supply chain management analysis models, as proposed by the Supply Chain Council (SCOR, version 10.0) and by Lambert (1997, Framework for Supply Chain Management), to the logistics of cash transfers in Brazil. Cash transfers consist of the transportation of notes to and from each node of the complex network formed by bank branches, ATMs, armored transportation providers, the government custodian, the Brazilian Central Bank and financial institutions. Although the logistics sustaining these operations are wide-ranging (country-sized), complex and subject to many financial regulations and security procedures, it was found that they were probably not fully integrated. Through primary and secondary data research and analysis using the above-mentioned models, the study concludes with propositions to substantially improve the efficiency of the operations.
Abstract:
Dimensionality reduction is employed for visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool, to retrieve latent spaces focused on the discriminability of the data. The method employs a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be exhibited using various existing visualization techniques. The training data are important for encoding the user's knowledge into the loop. However, this work also devises a strategy for calculating PLS reduced spaces when no training data are available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and is capable of working with small and unbalanced training sets.
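A minimal sketch of PLS-based supervised reduction to a 2D visual space (PLS-DA flavour), assuming a small labelled training set and a large unlabelled set to project; the data, sizes and class count are placeholders, not the authors' setup:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 50))    # small labelled training set, 50 dims
y_train = np.repeat([0, 1, 2], 20)     # class labels encoding user knowledge
X_large = rng.normal(size=(5000, 50))  # much larger set to be visualized

# One-hot class indicators let PLS seek directions that discriminate classes.
Y = np.eye(3)[y_train]
pls = PLSRegression(n_components=2).fit(X_train, Y)

coords = pls.transform(X_large)        # 2D scores, ready for a scatter plot
```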
Abstract:
This study presents the strategies for the prevention and early detection of oral cancer by means of screening in the elderly population of São Paulo, the richest and most populous state of Brazil. This research was a retrospective longitudinal study based on the analysis of secondary data. The variables - number of participating cities, coverage of screening, and number of suspicious and confirmed cases of oral cancer - were divided into two periods: 2001-2004 and 2005-2008. Data were analyzed statistically by the chi-square test at the 5% significance level. The implementation of a nationwide public oral health policy in 2004 and the reorganization of secondary and tertiary health care were evaluated as mediating factors capable of influencing the achieved outcomes. From 2001 to 2008, 2,229,273 oral examinations were performed. There was an addition of 205 participating cities by the end of the studied period (p<0.0001). The coverage of oral cancer screening increased from 4.1% to 16% (p<0.0001). There was a decrease in the number of suspicious lesions (from 9% in 2005 to 5% in 2008) (p<0.0001) and in the rate of confirmed oral cancer cases per 100,000 examinations (from 20.89 in 2001 to 10.40 in 2008) (p<0.0001). After 8 years of screening, there was a decrease in the number of suspicious lesions and confirmed cases of oral cancer in the population. The reorganization of the secondary and tertiary levels of oral care seems to have contributed to these changes, having a positive impact on the outcomes of oral cancer screening in São Paulo State.
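For illustration, a chi-square comparison of suspicious-lesion proportions between two screening periods, of the kind reported above; the counts in the table are placeholders, not the study's raw data:

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 contingency table: suspicious vs. non-suspicious
# examinations in two screening periods (placeholder counts).
table = [[9_000, 91_000],   # earlier period
         [5_000, 95_000]]   # later period
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")  # p < 0.05 -> significant at 5%
```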