950 results for Mathematical and statistical techniques


Relevance:

100.00%

Publisher:

Abstract:

The Six Sigma methodology has received considerable attention in the last two decades. This is due to its great potential to reduce process variability through the use of accurate data, facts and statistical techniques. The methodology seeks to improve the quality of products and services, maximizing the company's financial performance. Specifically, its implementation and results in medium-sized textile enterprises are unknown, although there are signs that the methodology can be applied successfully. Considering this scenario, the goal of this research is to describe the application of the Six Sigma methodology in a medium-sized textile company specialized in the production of male shirts in the state of Rio Grande do Norte, Brazil. First, we present a literature review, seeking to highlight the themes of quality, Six Sigma and its improvement methodology. Then, we show the implementation of the selected project, depicting the steps and procedures that must be performed. The results confirm the efficiency of Six Sigma in providing significant gains to companies. Substantial improvements are observed in the speed of product development and the flexibility of the parts produced, reducing the process lead time from 12.5 to 6.2 days, a performance improvement of over 50%. This also leads to cultural and behavioural change, creating motivation for the implementation of new projects and a continuous search for knowledge.
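As a rough check on the quoted figure, the reduction in lead time from 12.5 to 6.2 days corresponds to a relative improvement of just over 50%:

$$\frac{12.5 - 6.2}{12.5} = \frac{6.3}{12.5} \approx 0.504 \approx 50\%.$$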

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Background: Several mathematical and statistical methods have been proposed in the last few years to analyze microarray data. Most of those methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use those methods in their research. Here we present a user-friendly toolbox which allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills. Results: We introduce a user-friendly toolbox called GEDI (Gene Expression Data Interpreter), an extensible, open-source and freely available tool that we believe will be useful to a wide range of laboratories and to researchers with no background in Mathematics or Computer Science, allowing them to analyze their own data by applying both classical and advanced approaches developed and recently published by Fujita et al. Conclusion: GEDI is an integrated, user-friendly viewer that combines the state-of-the-art SVR, DVAR and SVAR algorithms previously developed by us. It facilitates the application of SVR, DVAR and SVAR beyond the mathematical formulas presented in the corresponding publications, and allows one to better understand the results by means of the available visualizations. Both running the statistical methods and visualizing the results are carried out within the graphical user interface, rendering these algorithms accessible to the broad community of researchers in Molecular Biology.
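To illustrate the kind of analysis such a toolbox wraps behind a graphical interface, the sketch below fits a sparse vector-autoregressive (SVAR-style) model to a synthetic expression time series, one Lasso regression per target gene. The data, gene count and regularisation strength are hypothetical, and this is not GEDI's implementation.

```python
# A minimal sketch (not the GEDI implementation) of SVAR-style network inference:
# regress each gene's expression at time t on all genes at time t-1 with a sparse penalty.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, G = 50, 5                         # time points, genes (hypothetical sizes)
X = rng.normal(size=(T, G))          # expression matrix, rows = time points

A_hat = np.zeros((G, G))             # estimated connectivity: regulators -> targets
for g in range(G):
    model = Lasso(alpha=0.1).fit(X[:-1], X[1:, g])   # x_t[g] ~ x_{t-1}
    A_hat[:, g] = model.coef_

print(np.round(A_hat, 2))            # nonzero entries suggest putative regulatory links
```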

Relevance:

100.00%

Publisher:

Abstract:

This Thesis focuses on the X-ray study of the inner regions of Active Galactic Nuclei, in particular on the formation of high-velocity winds by the accretion disk itself. Constraining the physical parameters of AGN winds is of paramount importance both for understanding the physics of the accretion/ejection flow onto supermassive black holes, and for quantifying the amount of feedback between the SMBH and its environment across cosmic time. The sources selected for the present study are BAL, mini-BAL, and NAL QSOs, known to host high-velocity winds associated with the AGN nuclear regions. Observationally, a three-fold strategy has been adopted:
- substantial samples of distant sources have been analyzed through spectral, photometric, and statistical techniques, to gain insights into their mean properties as a population;
- a moderately sized sample of bright sources has been studied through detailed X-ray spectral analysis, to give a first flavor of the general spectral properties of these sources, also from a temporally resolved point of view;
- the best nearby candidate has been thoroughly studied using the most sophisticated spectral analysis techniques applied to a large dataset with a high S/N ratio, to understand the details of the physics of its accretion/ejection flow.
There are three main channels through which this Thesis has been developed:
- [Archival Studies]: the XMM-Newton public archival data have been extensively used to analyze both a large sample of distant BAL QSOs and several individual bright sources, either BAL, mini-BAL, or NAL QSOs.
- [New Observational Campaign]: I proposed and was awarded new X-ray pointings of the mini-BAL QSOs PG 1126-041 and PG 1351+640 during the XMM-Newton AO-7 and AO-8. These produced the biggest X-ray observational campaign ever made on a mini-BAL QSO (PG 1126-041), including the longest exposure so far. Thanks to this exceptional dataset, a wealth of information has been obtained on both the intrinsic continuum and on the complex reprocessing media present in the inner regions of this AGN. Furthermore, the field of temporally resolved X-ray spectral analysis has finally been opened for mini-BAL QSOs.
- [Theoretical Studies]: some issues regarding the connection between theories and observations of AGN accretion disk winds have been investigated, through theoretical arguments and studies of synthetic absorption-line profiles.

Relevance:

100.00%

Publisher:

Abstract:

The report explores the problem of detecting complex point target models in a MIMO radar system. A complex point target is a mathematical and statistical model for a radar target that is not resolved in space, but exhibits varying complex reflectivity across the different bistatic view angles. The complex reflectivity can be modeled as a complex stochastic process whose index set is the set of all bistatic view angles, and the parameters of the stochastic process follow from an analysis of a target model comprising a number of ideal point scatterers randomly located within some radius of the target's center of mass. The proposed complex point targets may be applicable to statistical inference in multistatic or MIMO radar systems. Six different target models are summarized here: three 2-dimensional (Gaussian, Uniform Square, and Uniform Circle) and three 3-dimensional (Gaussian, Uniform Cube, and Uniform Sphere), which differ in the distribution assumed for the location of the point scatterers within the target. We develop data models for the received signals from such targets in a MIMO radar system with distributed assets and partially correlated signals, and consider the resulting detection problem, which reduces to the familiar Gauss-Gauss detection problem. We illustrate that the target parameters and the transmit signal influence detector performance through the target extent and the SNR, respectively. A series of receiver operating characteristic (ROC) curves are generated to show the impact of varying SNR on the detector. The Kullback–Leibler (KL) divergence is applied to obtain the approximate mean difference between the density functions assumed for the scatterer locations in the target models, showing how detector performance changes with the spatial extent of the point scatterers.
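The sketch below illustrates, under simplifying assumptions, the kind of complex point target described above: point scatterers drawn from a 2-D Gaussian around the target centre, with the aggregate complex reflectivity evaluated across view angles. The wavelength, scatterer count and simple round-trip phase model are assumptions, not the report's bistatic signal model.

```python
# A minimal sketch of a complex point target: the aggregate reflectivity is the coherent
# sum of randomly placed scatterers and therefore fluctuates with the view angle.
import numpy as np

rng = np.random.default_rng(1)
wavelength = 0.03                                    # metres (assumed)
k = 2 * np.pi / wavelength
N = 20                                               # number of ideal point scatterers
scatterers = rng.normal(scale=0.5, size=(N, 2))      # 2-D Gaussian target model
amps = rng.normal(size=N) + 1j * rng.normal(size=N)  # per-scatterer complex amplitudes

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)          # view angles
proj = scatterers @ np.vstack([np.cos(angles), np.sin(angles)])  # range offset per angle
reflectivity = (amps[:, None] * np.exp(-2j * k * proj)).sum(axis=0)

print(np.abs(reflectivity[:5]))      # magnitude varies markedly from angle to angle
```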

Relevance:

100.00%

Publisher:

Abstract:

Transparent and translucent objects involve both light reflection and transmission at surfaces. This paper presents a physically based transmission model for rough surfaces. The surface is assumed to be locally smooth, and statistical techniques are applied to calculate light transmission through a local illumination area. We have obtained an analytical expression for single scattering. The analytical model has been compared to our Monte Carlo simulations as well as to previous simulations, and good agreement has been achieved. The presented model has potential applications in the realistic rendering of transparent and translucent objects.
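A small Monte Carlo sketch of rough-surface transmission is given below, assuming Gaussian microfacet slopes with per-facet Fresnel transmittance; it is in the spirit of the simulations mentioned above, not the paper's analytical single-scattering model.

```python
# Monte Carlo sketch: average Fresnel transmittance over microfacet normals drawn from a
# Gaussian slope distribution (assumed), compared with a perfectly smooth interface.
import numpy as np

rng = np.random.default_rng(2)

def fresnel_T(cos_i, n1, n2):
    """Unpolarised Fresnel transmittance; returns 0 on total internal reflection."""
    sin2_t = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)
    if sin2_t >= 1.0:
        return 0.0
    cos_t = np.sqrt(1.0 - sin2_t)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 1.0 - 0.5 * (rs + rp)

def avg_transmittance(theta_i, roughness, n1=1.0, n2=1.5, samples=5000):
    d = np.array([np.sin(theta_i), 0.0, -np.cos(theta_i)])   # incident direction
    slopes = rng.normal(scale=roughness, size=(samples, 2))  # Gaussian microfacet slopes
    normals = np.column_stack([-slopes[:, 0], -slopes[:, 1], np.ones(samples)])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    cos_i = np.clip(-(normals @ d), 0.0, 1.0)                # back-facing facets ignored
    return np.mean([fresnel_T(c, n1, n2) for c in cos_i])

print(avg_transmittance(np.radians(60), roughness=0.0))      # smooth reference
print(avg_transmittance(np.radians(60), roughness=0.3))      # roughened interface
```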

Relevance:

100.00%

Publisher:

Abstract:

Rapid industrialization and urbanization in developing countries has led to an increase in air pollution, along a similar trajectory to that previously experienced by the developed nations. In China, particulate pollution is a serious environmental problem that is influencing air quality, regional and global climates, and human health. In response to the extremely severe and persistent haze pollution experienced by about 800 million people during the first quarter of 2013 (refs 4, 5), the Chinese State Council announced its aim to reduce concentrations of PM2.5 (particulate matter with an aerodynamic diameter less than 2.5 micrometres) by up to 25 per cent relative to 2012 levels by 2017 (ref. 6). Such efforts, however, require elucidation of the factors governing the abundance and composition of PM2.5, which remain poorly constrained in China. Here we combine a comprehensive set of novel and state-of-the-art offline analytical approaches and statistical techniques to investigate the chemical nature and sources of particulate matter at urban locations in Beijing, Shanghai, Guangzhou and Xi'an during January 2013. We find that the severe haze pollution event was driven to a large extent by secondary aerosol formation, which contributed 30-77 per cent and 44-71 per cent (average for all four cities) of PM2.5 and of organic aerosol, respectively. On average, the contributions of secondary organic aerosol (SOA) and secondary inorganic aerosol (SIA) are found to be of similar importance (SOA/SIA ratios range from 0.6 to 1.4). Our results suggest that, in addition to mitigating primary particulate emissions, reducing the emissions of secondary aerosol precursors from, for example, fossil fuel combustion and biomass burning is likely to be important for controlling China's PM2.5 levels and for reducing the environmental, economic and health impacts resulting from particulate pollution.

Relevance:

100.00%

Publisher:

Abstract:

Machine learning and scientometrics are the scientific disciplines covered in this dissertation. Machine learning deals with the construction and study of algorithms that can learn from data, whereas scientometrics is mainly concerned with the analysis of science from a quantitative perspective. Nowadays, advances in machine learning provide the mathematical and statistical tools for properly working with the vast amount of scientometric data stored in bibliographic databases. In this context, the use of novel machine learning methods in scientometric applications is the focus of this dissertation, which proposes new machine learning contributions that shed light on the scientometrics area. These contributions are divided into three parts. First, several supervised cost-(in)sensitive models are learned to predict the scientific success of articles and researchers. Cost-sensitive models are not interested in maximizing classification accuracy, but in minimizing the expected total cost derived from classification errors. In this context, publishers of scientific journals could have a tool capable of predicting the future citation count of an article before it is published, whereas promotion committees could predict the annual increase of a researcher's h-index within the first few years. These predictive models would pave the way for new assessment systems. Second, several probabilistic graphical models are learned to exploit and discover new relationships among the vast number of existing bibliometric indices. In this context, the scientific community could measure how some indices influence others in probabilistic terms, and perform evidence propagation and abductive inference to answer bibliometric questions. The scientific community could also uncover which bibliometric indices have the highest predictive power. This is a multi-output regression problem in which the role of each variable, predictor or response, is unknown beforehand. The resulting indices could be very useful for prediction purposes, in the sense that once their values are known, the values of the remaining bibliometric indices provide no additional predictive information. Third, a scientometric study of Spanish computer science research is performed under the publish-or-perish culture. This study is based on a cluster analysis methodology which characterizes research activity in terms of productivity, visibility, quality, prestige and international collaboration, and it also analyzes the effects of collaboration on productivity and visibility under different circumstances.
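As an illustration of the first contribution, the sketch below trains a cost-sensitive classifier that penalises missing a future highly cited article more heavily than a false alarm; the features, costs and synthetic labels are hypothetical and do not reproduce the dissertation's models.

```python
# A minimal cost-sensitive classification sketch: asymmetric misclassification costs are
# encoded through class weights rather than by maximizing raw accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.poisson(8, n),         # number of authors (hypothetical feature)
    rng.poisson(30, n),        # number of references
    rng.uniform(0, 5, n),      # journal impact factor
])
y = (X[:, 2] + rng.normal(0, 1, n) > 4).astype(int)   # synthetic "highly cited" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(class_weight={0: 1, 1: 5},   # missing a future hit costs 5x
                         max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```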

Relevance:

100.00%

Publisher:

Abstract:

The valuation of urban property, especially when approached from a mass-appraisal point of view, is not an easy task. Both the current Spanish legislation and the international valuation standards establish that values must be referenced to market value, but the real-estate market is characterised by limited transparency and by a relatively illiquid product. In this context, it seems necessary to study new tools that make it possible to establish the value of properties with greater confidence. The analysis of the factors that determine property prices allows us to identify the characteristics that influence them most, such as size, use, typology, quality, age and location. Starting from these characteristics and through the study of the urban structure, locating the homogeneous zones and analysing the variables of their real-estate product, a new methodology has been developed based on the building type as a strategy for territorial valuation. Throughout this work, whose scope of analysis is focused on the municipalities of the Autonomous Region of Madrid, a comparative analysis of their characteristics shows how the type of urban structure significantly influences the quality of the results obtained. The sensitivity of the results to the different methods of data processing and of mathematical and statistical analysis is also examined. In all, it can be stated that the proposed methodology facilitates, improves and supports the valuation of real estate, and can be applied directly both to the mass valuation of properties and to individual appraisals.
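A minimal sketch of a hedonic-style valuation model along these lines is shown below, regressing price on the characteristics identified as most influential (size, age, quality and zone); the data and coefficients are synthetic and purely illustrative, not the thesis's methodology.

```python
# Hedonic regression sketch: price modelled as a function of property attributes, with the
# zone encoded as a categorical variable; once fitted, the model can score whole portfolios.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "size_m2": rng.uniform(40, 200, n),
    "age_years": rng.integers(0, 80, n),
    "quality": rng.integers(1, 6, n),                      # 1 (basic) to 5 (high)
    "zone": rng.choice(["centre", "suburb", "periphery"], n),
})
price = (2500 * df["size_m2"] - 800 * df["age_years"] + 15000 * df["quality"]
         + df["zone"].map({"centre": 60000, "suburb": 20000, "periphery": 0})
         + rng.normal(0, 20000, n))

model = make_pipeline(
    ColumnTransformer([("zone", OneHotEncoder(), ["zone"])], remainder="passthrough"),
    LinearRegression(),
)
model.fit(df, price)
print(model.predict(df.head(3)))    # mass valuation: apply the fitted model to new records
```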

Relevance:

100.00%

Publisher:

Abstract:

Electrical and magnetic brain waves of two subjects were recorded for the purpose of recognizing which one of 12 sentences or seven words auditorily presented was processed. The analysis consisted of averaging over trials to create prototypes and test samples, to each of which a Fourier transform was applied, followed by filtering and an inverse transformation to the time domain. The filters used were optimal predictive filters, selected for each subject. A still further improvement was obtained by taking differences between recordings of two electrodes to obtain bipolar pairs that then were used for the same analysis. Recognition rates, based on a least-squares criterion, varied, but the best were above 90%. The first words of prototypes of sentences also were cut and pasted to test, at least partially, the invariance of a word’s brain wave in different sentence contexts. The best result was above 80% correct recognition. Test samples made up only of individual trials also were analyzed. The best result was 134 correct of 288 (47%), which is promising, given that the expected recognition number by chance is just 24 (or 8.3%). The work reported in this paper extends our earlier work on brain-wave recognition of words only. The recognition rates reported here further strengthen the case that recordings of electric brain waves of words or sentences, together with extensive mathematical and statistical analysis, can be the basis of new developments in our understanding of brain processing of language.
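The sketch below mirrors the pipeline described above on synthetic signals: average trials into prototypes, band-limit them with a Fourier filter, and classify a test average by a least-squares match. The simple low-pass cutoff and simulated waveforms are assumptions, not the study's optimal predictive filters or actual recordings.

```python
# Prototype-based recognition sketch: FFT low-pass filtering plus a least-squares criterion.
import numpy as np

rng = np.random.default_rng(5)
fs, T, n_classes, n_trials = 250, 250, 7, 40      # sampling rate, samples, words, trials

def lowpass(x, cutoff_hz=10):
    """Zero the FFT bins above the cutoff and transform back to the time domain."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    X[freqs > cutoff_hz] = 0
    return np.fft.irfft(X, n=x.size)

templates = [np.sin(2 * np.pi * (2 + c) * np.arange(T) / fs) for c in range(n_classes)]
def trials(c):                                    # noisy single trials of word c
    return templates[c] + rng.normal(0, 2.0, size=(n_trials, T))

prototypes = np.array([lowpass(trials(c).mean(axis=0)) for c in range(n_classes)])
test = lowpass(trials(3).mean(axis=0))            # held-out average for word 3
pred = int(np.argmin(((prototypes - test) ** 2).sum(axis=1)))   # least-squares match
print(pred)                                       # expected: 3
```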

Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62P10, 92C20

Relevance:

100.00%

Publisher:

Abstract:

The application of pharmacokinetic modelling within the drug development field essentially allows one to develop a quantitative description of the temporal behaviour of a compound of interest at a tissue/organ level, by identifying and defining relationships between the dose of a drug and the dependent variables. In order to understand and characterise the pharmacokinetics of a drug, it is often helpful to employ pharmacokinetic modelling using empirical or mechanistic approaches. Pharmacokinetic models can be developed within mathematical and statistical commercial software such as MATLAB, either through conventional mathematical and computational coding or by using the SimBiology toolbox available within MATLAB, which provides a graphical user interface approach to developing physiologically based pharmacokinetic (PBPK) models. For formulations dosed orally, a prerequisite for clinical activity is the entry of the drug into the systemic circulation.
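As a simple illustration of the empirical approach mentioned above, the sketch below implements a one-compartment model with first-order oral absorption (not a full PBPK model); the rate constants, volume and dose are hypothetical, and the same model could equally be coded in MATLAB/SimBiology.

```python
# One-compartment oral-absorption sketch: dose in the gut is absorbed into, and eliminated
# from, the central compartment; plasma concentration follows from the compartment amount.
import numpy as np
from scipy.integrate import odeint

ka, ke, V = 1.2, 0.2, 30.0       # absorption rate (1/h), elimination rate (1/h), volume (L)
dose = 500.0                     # oral dose (mg)

def model(y, t):
    A_gut, A_central = y
    dA_gut = -ka * A_gut                         # first-order absorption from the gut
    dA_central = ka * A_gut - ke * A_central     # entry into and elimination from plasma
    return [dA_gut, dA_central]

t = np.linspace(0, 24, 200)                      # hours after dosing
A = odeint(model, [dose, 0.0], t)
conc = A[:, 1] / V                               # plasma concentration (mg/L)
print(f"Cmax = {conc.max():.2f} mg/L at t = {t[conc.argmax()]:.1f} h")
```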