878 results for regression discrete models
Abstract:
This study constructs performance prediction models to estimate end-user perceived video quality on mobile devices for the latest video encoding techniques: VP9 and H.265. Both subjective and objective video quality assessments were carried out to collect data and select the most desirable predictors. Using statistical regression, two models were generated, achieving prediction accuracies of 94.5% and 91.5% respectively, depending on whether the predictor derived from the objective assessment is included. These proposed models can be used directly by media industries for video quality estimation, and will ultimately help them ensure a positive end-user quality of experience on future mobile devices after the adoption of the latest video encoding technologies.
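The regression step described above can be sketched as follows. The predictors (bitrate, resolution, an objective metric score) and the opinion scores below are illustrative placeholders, not data or variables from the study:

```python
import numpy as np

# Hypothetical predictors per test clip: bitrate (Mbps), resolution
# (megapixels), and an objective quality score. Values are illustrative.
X = np.array([
    [1.0, 0.9, 62.0],
    [2.5, 2.1, 74.0],
    [4.0, 2.1, 81.0],
    [6.0, 8.3, 88.0],
    [8.0, 8.3, 93.0],
])
y = np.array([2.1, 3.0, 3.6, 4.2, 4.6])  # mean opinion scores (1-5 scale)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = A @ coef
print(np.round(predicted, 2))
```

Dropping the third predictor column would mimic the study's second model, which omits the objective-assessment predictor.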
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method can also incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One data set, from the Burdekin River, consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other, from the Tully River, covers the period July 2000 to June 2008.
For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully data set, however, incorporating the additional predictive variables, namely the discounted flow and flow phases (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
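A minimal sketch of the rating-curve idea described above, with synthetic data. The exponentially discounted flow term and all parameter values are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily flow record (m^3/s); skewed to mimic flood events.
n = 365
flow = np.exp(rng.normal(2.0, 0.8, n))
# "Discounted flow": an exponentially weighted sum of past flows, a rough
# proxy for constituent exhaustion during events (illustrative form only).
disc = np.zeros(n)
for t in range(1, n):
    disc[t] = 0.9 * disc[t - 1] + flow[t - 1]

# Synthetic "true" concentration (mg/L) depending on flow and exhaustion.
true_conc = np.exp(0.5 + 0.6 * np.log(flow) - 0.002 * disc
                   + rng.normal(0, 0.2, n))

# Concentration is observed only on a sparse subset of days.
sampled = rng.choice(n, size=30, replace=False)

# Generalized rating curve: log C ~ log Q + discounted flow.
A = np.column_stack([np.ones(30), np.log(flow[sampled]), disc[sampled]])
beta, *_ = np.linalg.lstsq(A, np.log(true_conc[sampled]), rcond=None)

# Predict concentration on every day, then integrate the load.
A_all = np.column_stack([np.ones(n), np.log(flow), disc])
conc_hat = np.exp(A_all @ beta)
load = np.sum(conc_hat * flow * 86400) / 1e6  # mg/L * m^3/s * s -> tonnes
print(f"estimated annual load: {load:.1f} t")
```

A real application would also apply a bias correction for the log-back-transformation and report the standard error, as the paper describes.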
Abstract:
Objective: To identify measures that most closely relate to hydration in healthy Brahman-cross neonatal calves that experience milk deprivation. Methods: In a dry tropical environment, eight neonatal Brahman-cross calves were prevented from suckling for 2–3 days, during which measurements were performed twice daily. Results: Mean body water, as estimated by the mean urea space, was 74 ± 3% of body weight at full hydration. The mean decrease in hydration was 7.3 ± 1.1% per day. The rate of decrease was more than three-fold higher during the day than at night. At an ambient temperature of 39°C, the decrease in hydration averaged 1.1% hourly. The measures most useful in predicting the degree of hydration, in both simple and multiple-regression prediction models, were body weight, hindleg length, girth, ambient and oral temperatures, eyelid tenting, alertness score and plasma sodium. These parameters differ from those recommended for assessing calves with diarrhoea. Single-measure predictions had a standard error of at least 5%, which reduced to 3–4% when multiple measures were used. Conclusion: Simple assessment of non-suckling Brahman-cross neonatal calves can estimate the severity of dehydration, but the estimates are imprecise. Dehydration in healthy neonatal calves that do not have access to milk can exceed 20% (>15% weight loss) in 1–3 days under tropical conditions, at which point some are unable to recover without clinical intervention.
Abstract:
In the context of increasing threats to the sensitive marine ecosystem from toxic metals, this study investigated metal build-up on impervious surfaces specific to commercial seaports. The knowledge generated will contribute to managing toxic metal pollution of the marine ecosystem. The study found that inter-modal operations areas and the main access roadway had the highest loads, followed by container storage and vehicle marshalling sites, while the quay line and short-term storage areas had the lowest. Additionally, Cr, Al, Pb, Cu and Zn were found predominantly attached to solids, while significant amounts of Cu, Pb and Zn were found as nutrient complexes. As such, treatment options based on solids retention can be effective for some metal species but ineffective for others. Furthermore, Cu and Zn are more likely to become bioavailable in seawater due to their strong association with nutrients. Mathematical models replicating the metal build-up process were also developed using an experimental design approach and partial least squares regression. The models for Cr and Pb were found to be reliable, while those for Al, Zn and Cu were relatively less reliable but could be employed for preliminary investigations.
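Partial least squares regression, as used above, can be sketched with a single latent component in a NIPALS-like form. The data below (traffic-like predictors, a metal-load response) and all coefficients are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative build-up data: three predictors (e.g. traffic volume,
# impervious fraction, antecedent dry days) and a metal-load response.
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(0, 0.3, 30)

# One-component PLS: weight vector from the covariance with y, latent
# scores, then a rank-one regression in the latent variable.
Xc = X - X.mean(0)
yc = y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)          # unit weight vector
t = Xc @ w                      # latent scores
q = (t @ yc) / (t @ t)          # response loading
y_hat = t * q + y.mean()

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"one-component PLS R^2: {r2:.2f}")
```

Additional components would be extracted from the deflated residual matrices; one component suffices to show the mechanics.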
Abstract:
Detecting Earnings Management Using Neural Networks. Trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, company management to use their judgment and make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management. A majority of these methods are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression, which can handle non-linear relationships, is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables. The discretionary accruals in the highest and lowest quartiles for these six variables are then compared. Third, a data set containing simulated earnings management is used. Both expense and revenue manipulation ranging between -5% and 5% of lagged total assets is simulated. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies.
Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
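The feed-forward back-propagation network named above can be illustrated with a minimal numpy version. The two inputs stand in for Jones-model regressors (change in revenue and gross PP&E, scaled by lagged assets) and the curved target mimics the non-linearity the abstract cites; data and architecture are assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: two scaled regressors, a mildly non-linear
# accrual target (illustrative functional form only).
X = rng.normal(size=(200, 2))
y = (0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 0] ** 2)[:, None]

# One hidden layer, tanh activation, trained by plain back-propagation.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
    out = H @ W2 + b2                   # linear output layer
    err = out - y                       # gradient of 0.5*squared error
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = err @ W2.T * (1 - H ** 2)      # back-propagate through tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

Because the hidden layer is non-linear, no linear form between accruals and the regressors has to be assumed, which is the advantage the abstract highlights over the regression-based models.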
Abstract:
This paper deals with the quasi-static and dynamic mechanical analysis of montmorillonite-filled polypropylene composites. Nanocomposites were prepared by blending montmorillonite (nanoclay), varying from 3 to 9% by weight, with polypropylene. The dynamic mechanical properties, such as storage modulus, loss modulus and mechanical loss factor, of PP and the nanocomposites were investigated over a range of temperatures and frequencies. Results showed better mechanical and thermomechanical properties at higher nanoclay concentrations. Regression models developed through design of experiments (DOE) were used to predict the storage modulus, and their predictions were compared with those of theoretical models.
Abstract:
The healing times for the growth of thin films on patterned substrates are studied using simulations of two discrete models of surface growth: the Family model and the Das Sarma-Tamborenea (DT) model. The healing time, defined as the time at which the characteristics of the growing interface are "healed" to those obtained in growth on a flat substrate, is determined via the study of the nearest-neighbor height difference correlation function. Two different initial patterns are considered in this work: a relatively smooth tent-shaped triangular substrate and an atomically rough substrate with single-site pillars or grooves. We find that the healing time of the Family and DT models on an L x L triangular substrate is proportional to L^z, where z is the dynamical exponent of the models. For the Family model, we also analyze theoretically, using a continuum description based on the linear Edwards-Wilkinson equation, the time evolution of the nearest-neighbor height difference correlation function in this system. The correlation functions obtained from continuum theory and simulation are found to be consistent with each other for the relatively smooth triangular substrate. For substrates with periodic and random distributions of pillars or grooves of varying size, the healing time is found to increase linearly with the height (depth) of the pillars (grooves). We show explicitly that the simulation data for the Family model grown on a substrate with pillars or grooves do not agree with results of a calculation based on the continuum Edwards-Wilkinson equation. This result implies that a continuum description does not work when the initial pattern is atomically rough. The observed dependence of the healing time on the substrate size and the initial height (depth) of pillars (grooves) can be understood from the details of the diffusion rule of the atomistic model.
The healing time of both models for pillars is larger than that for grooves of depth equal to the height of the pillars. The calculated healing time for both the Family and DT models is found to depend on how the pillars and grooves are distributed over the substrate. (C) 2014 Elsevier B.V. All rights reserved.
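The Family model's deposition rule and the healing diagnostic can be sketched in one dimension. The lattice size, pillar height and growth duration below are arbitrary choices for illustration, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def family_deposit(h, rng):
    """One Family-model event: a particle dropped at a random site
    relaxes to the lowest of the site and its two nearest neighbours."""
    L = len(h)
    i = rng.integers(L)
    trio = [(i - 1) % L, i, (i + 1) % L]    # periodic boundaries
    j = min(trio, key=lambda k: h[k])
    h[j] += 1

def nn_correlation(h):
    """Nearest-neighbour height-difference correlation G(1)."""
    return np.mean((h - np.roll(h, 1)) ** 2)

L = 64
# Atomically rough initial condition: a single-site pillar of height 16.
h = np.zeros(L, dtype=int)
h[L // 2] = 16

g0 = nn_correlation(h)
for _ in range(200 * L):        # grow 200 monolayers
    family_deposit(h, rng)
g1 = nn_correlation(h)
print(f"G(1) before: {g0:.3f}, after: {g1:.3f}")
```

Tracking G(1) during growth until it matches the flat-substrate value gives an operational healing time, which is the quantity the paper measures against L and the pillar height.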
Abstract:
Beam lattice-type models, such as the Euler-Bernoulli (or Timoshenko) beam lattice and the generalized beam (GB) lattice, have proved very effective in simulating failure processes in concrete and rock due to their simplicity and easy implementation. However, these existing lattice models only take into account tensile failures, so they may not be applicable to the simulation of failure behaviors under compressive states. The main aim of this paper is to incorporate the Mohr-Coulomb failure criterion, which is widely used for many kinds of materials, into the GB lattice procedure. The improved GB lattice procedure is capable of modeling both element failures and the contact/separation of cracked elements. Numerical examples show its effectiveness in simulating compressive failures. Furthermore, the influences of lateral confinement, friction angle, stiffness of the loading platen and inclusion of aggregates on failure processes are analyzed in detail.
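For reference, the Mohr-Coulomb criterion mentioned above is usually written in its standard textbook forms (notation is conventional, not taken from this paper):

```latex
% Shear strength on a failure plane: cohesion c, friction angle \varphi,
% normal stress \sigma_n on the plane.
\tau_f = c + \sigma_n \tan\varphi

% Equivalent principal-stress form (compression positive):
\sigma_1 = \frac{1+\sin\varphi}{1-\sin\varphi}\,\sigma_3
         + \frac{2c\cos\varphi}{1-\sin\varphi}
```

Failure occurs when the shear stress on any plane reaches this frictional-cohesive limit, which is what allows the improved lattice to capture compressive as well as tensile failures.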
Abstract:
Several relevant industrial applications involve adsorption processes, examples including product purification, separation of substances, and pollution and humidity control. The growing interest in biomolecule purification processes is driven mainly by the development of biotechnology and by the pharmaceutical and chemical industries' demand for products of high purity. The simulated moving bed (SMB) is a continuous chromatographic process that simulates counter-current movement of the adsorbent bed relative to the liquid by periodically switching the positions of the inlet and outlet streams, operating continuously without loss of purity in the outlet streams. These consist of the extract, rich in the more strongly adsorbed component, and the raffinate, rich in the more weakly adsorbed component, making the process particularly well suited to binary separations. The aim of this thesis is to study and evaluate different approaches using stochastic optimization methods for the inverse problem of the phenomena involved in the SMB separation process. Discrete models with different mass-transfer treatments were used, with the advantage of a large number of theoretical plates in a column of moderate length; in this process, separation increases as the solutes flow through the bed, that is, with the number of times the molecules interact between the mobile and stationary phases, thereby reaching equilibrium. The modelling and simulation carried out with these approaches allowed the main characteristics of an SMB separation unit to be evaluated and identified. The application under study concerns the simulation of separation processes for baclofen and ketamine. These compounds were chosen because they are well characterized in the literature, with adsorption kinetics and equilibrium studies available among the experimental results.
With the experimental results in hand, the direct and inverse problems of an SMB separation unit were evaluated, comparing the computed results with the experimental ones, always based on criteria of separation efficiency between the mobile and stationary phases. The methods studied were the GA (Genetic Algorithm) and the PCA (Particle Collision Algorithm), and a hybridization of the GA and PCA was also developed. As a result, this thesis analyzes and compares the optimization methods in different aspects related to the kinetic mechanism of mass transfer by adsorption and desorption between the phases of the adsorbent.
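The discrete (theoretical-plate) column description underlying such models can be sketched as a cascade of mixing cells with a linear adsorption isotherm. All parameters below (plate count, equilibrium constant, porosity, pulse length) are illustrative, not the thesis's fitted values:

```python
import numpy as np

# Mixing-cell sketch of a single chromatographic column.
N = 50            # number of theoretical plates
K = 2.0           # linear isotherm q = K * c
eps = 0.4         # bed porosity
dt = 0.01         # time step, in units of plate residence time
steps = 40000

retard = 1 + (1 - eps) / eps * K   # retardation from adsorption
c = np.zeros(N)                    # liquid-phase concentration per plate
out = []
for t in range(steps):
    inlet = 1.0 if t * dt < 5.0 else 0.0     # rectangular injection pulse
    cin = np.concatenate([[inlet], c[:-1]])  # feed each plate from upstream
    c = c + dt * (cin - c) / retard          # mixing-cell mass balance
    out.append(c[-1])

out = np.array(out)
print(f"peak outlet concentration: {out.max():.3f} "
      f"at t = {out.argmax() * dt:.1f}")
```

A full SMB simulation couples several such columns and periodically shifts the port positions; the single-column cascade is the building block the discrete models in the thesis refine with different mass-transfer treatments.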
Abstract:
The ultimate objective of the research conducted by the authors is to explore the feasibility of determining reliable in situ values of soil modulus as a function of strain. In field experiments, an excitation is applied on the ground surface using large-scale shakers, and the response of the soil deposit is recorded through receivers embedded in the soil. The focus of this paper is on the simulation and observation of signals that would be recorded at the receiver locations under idealized conditions, to provide guidelines on the interpretation of the field measurements. Discrete models are used to reproduce one-dimensional and three-dimensional geometries. When the first times of arrival are detected by receivers under the vertical impulse, they coincide with the arrival of the P wave and are therefore related to the constrained modulus of the material. If one considers, on the other hand, phase differences between the motions at two receivers, the picture is far more complicated, and one obtains propagation velocities, functions of frequency and measuring location, that correspond to neither the constrained modulus nor Young's modulus. It is then necessary to conduct more rigorous and complicated analyses in order to interpret the data. This paper discusses and illustrates these points. Copyright © 2008 John Wiley & Sons, Ltd.
Abstract:
PURPOSE: The role of PM10 in the development of allergic diseases remains controversial among epidemiological studies, partly due to the inability to control for spatial variations in large-scale risk factors. This study aims to investigate spatial correspondence between the level of PM10 and allergic diseases at the sub-district level in Seoul, Korea, in order to evaluate whether the impact of PM10 is observable and spatially varies across the sub-districts. METHODS: PM10 measurements at 25 monitoring stations in the city were interpolated to 424 sub-districts where annual inpatient and outpatient count data for 3 types of allergic diseases (atopic dermatitis, asthma, and allergic rhinitis) were collected. We estimated multiple ordinary least squares regression models to examine the association of the PM10 level with each of the allergic diseases, controlling for various sub-district level covariates. Geographically weighted regression (GWR) models were conducted to evaluate how the impact of PM10 varies across the sub-districts. RESULTS: PM10 was found to be a significant predictor of atopic dermatitis patient count (P<0.01), with greater association when spatially interpolated at the sub-district level. No significant effect of PM10 was observed on allergic rhinitis and asthma when socioeconomic factors were controlled for. GWR models revealed spatial variation of PM10 effects on atopic dermatitis across the sub-districts in Seoul. The relationship of PM10 levels to atopic dermatitis patient counts is found to be significant only in the Gangbuk region (P<0.01), along with other covariates including average land value, poverty rate, level of education and apartment rate (P<0.01). CONCLUSIONS: Our findings imply that PM10 effects on allergic diseases might not be consistent throughout Seoul.
GIS-based spatial modeling techniques could play a role in evaluating spatial variation of air pollution impacts on allergic diseases at the sub-district level, which could provide valuable guidelines for environmental and public health policymakers.
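The core of geographically weighted regression, as used above, is a separate distance-weighted least squares fit at every location. The sketch below uses synthetic sub-district data with a spatially varying PM10 coefficient; the coordinates, bandwidth and effect sizes are assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sub-districts: coordinates, a PM10-like predictor, and a
# response whose PM10 coefficient grows toward the "north", mimicking
# the spatially varying effect GWR is meant to reveal.
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
pm10 = rng.normal(50, 10, n)
beta_true = 0.5 + 0.1 * coords[:, 1]
y = 5 + beta_true * pm10 + rng.normal(0, 2, n)

def gwr_coefficients(coords, x, y, bandwidth=2.0):
    """Local weighted least squares at every location, with a Gaussian
    distance kernel; returns local (intercept, slope) per site."""
    X = np.column_stack([np.ones(len(x)), x])
    betas = np.empty((len(x), 2))
    for i in range(len(x)):
        d2 = np.sum((coords - coords[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))   # kernel weights
        XtW = X.T * w
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)
    return betas

betas = gwr_coefficients(coords, pm10, y)
south = betas[coords[:, 1] < 5, 1].mean()
north = betas[coords[:, 1] >= 5, 1].mean()
print(f"mean local PM10 slope, south: {south:.2f}, north: {north:.2f}")
```

Mapping the local slopes is what lets a GWR study identify regions, like Gangbuk in the abstract, where the pollutant effect is concentrated.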
Abstract:
Technological and scientific advances in healthcare have brought together fields such as medicine and mathematics, with science tasked with making the means of investigation, diagnosis, monitoring and therapy more effective. The methods developed and the studies presented in this dissertation stem from the need to find answers and solutions to different challenges identified in the field of anesthesia. The nature of these problems necessarily leads to the application, adaptation and combination of different methods and models from various areas of mathematics. The ability to induce anesthesia in patients safely and reliably gives rise to an enormous variety of situations that must be taken into account, demanding intensive study. Prediction methods and models that allow better personalization of the dose administered to the patient, and monitoring of the effect induced by each drug with more reliable signals, are therefore fundamental for research and progress in this field. In this context, with the aim of clarifying the use of appropriate statistical treatment in anesthesia studies, I set out to apply different statistical analyses to develop a prediction model for the brain's response to two drugs during sedation. Data obtained from volunteers are used to study the pharmacodynamic interaction between two anesthetic drugs. In a first stage, linear regression models are explored to model the effect of the drugs on the cerebral BIS signal (the bispectral index of the EEG, an indicator of depth of anesthesia), that is, to estimate the effect that the drug concentrations have on the depression of the electroencephalogram (as assessed by the BIS).
In the second stage of this work, different interactions are identified with cluster analysis and the resulting model is validated with discriminant analysis, identifying homogeneous groups in the sample through clustering techniques. The number of groups in the sample was obtained, in an exploratory phase, by hierarchical clustering, and the identified groups were characterized using k-means clustering. The reproducibility of the clustering models was tested through discriminant analysis. The main conclusions indicate that the significance test of the linear regression equation showed the model to be highly significant. The propofol and remifentanil variables significantly influence the BIS, and the model improves with the inclusion of remifentanil. This work further shows that it is possible to build a model that groups the drug concentrations according to their effect on the BIS signal, supported by clustering and discriminant techniques. The results clearly demonstrate the pharmacodynamic interaction of the two drugs when Cluster 1 and Cluster 3 are compared: for similar propofol concentrations, the effect on the BIS differs markedly depending on the magnitude of the remifentanil concentration. In short, the study clearly demonstrates that when remifentanil is administered with propofol (a hypnotic), the effect of the latter is potentiated, driving the BIS signal to very low values.
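The k-means step of such a clustering analysis can be sketched directly. The three synthetic groups below (propofol concentration, remifentanil concentration, BIS) only mimic the pattern the dissertation describes, where similar propofol levels with different remifentanil levels give clearly different BIS values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative stand-in data, columns: (propofol, remifentanil, BIS).
g1 = rng.normal([2.0, 1.0, 60], [0.2, 0.2, 4], size=(40, 3))
g2 = rng.normal([2.0, 4.0, 35], [0.2, 0.2, 4], size=(40, 3))
g3 = rng.normal([4.0, 4.0, 25], [0.2, 0.2, 4], size=(40, 3))
data = np.vstack([g1, g2, g3])

def kmeans(X, k, iters=50, rng=rng):
    """Plain k-means: assign each point to the nearest centroid, then
    recompute centroids as cluster means (empty clusters kept in place)."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        new = []
        for j in range(k):
            pts = X[labels == j]
            new.append(pts.mean(axis=0) if len(pts) else centroids[j])
        centroids = np.array(new)
    return labels, centroids

labels, centroids = kmeans(data, 3)
print(np.round(centroids[np.argsort(centroids[:, 2])], 1))
```

In the dissertation's workflow, hierarchical clustering would first suggest the number of groups, and discriminant analysis would then test how reproducibly new observations fall into these clusters.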
Abstract:
Vitamin D metabolites are important in the regulation of bone and calcium homeostasis, but also have a more ubiquitous role in the regulation of cell differentiation and immune function. Severely low circulating 25-hydroxyvitamin D [25(OH)D] concentrations have been associated with the onset of active tuberculosis (TB) in immigrant populations, although the association with latent TB infection (LTBI) has not received much attention. A previous study identified the prevalence of LTBI among a sample of Mexican migrant workers enrolled in Canada's Seasonal Agricultural Workers Program (SAWP) in the Niagara Region of Ontario. The aim of the present study was to determine the vitamin D status of the same sample, and to identify whether a relationship existed with LTBI. Studies of vitamin D deficiency and active TB are most commonly carried out among immigrant populations in non-endemic regions, in which reactivation of LTBI has occurred. Currently, there is limited knowledge of the association between vitamin D deficiency and LTBI. Entry into Canada ensured that these individuals did not have active TB, and LTBI status was established previously by an interferon-gamma release assay (IGRA) (QuantiFERON-TB Gold In-Tube®, Cellestis Ltd., Australia). Awareness of vitamin D status may enable individuals at risk of deficiency to improve their nutritional health, and those with LTBI to be aware of this risk factor for disease. The prevalence of vitamin D insufficiency among the Mexican migrant workers was determined from serum samples collected in the summer of 2007 as part of the cross-sectional LTBI study. Samples were measured for concentrations of the main circulating vitamin D metabolite, 25(OH)D, with a widely used 125I 25(OH)D RIA (DiaSorin Inc.®, Stillwater, MN), and were categorized as deficient (<37.5 nmol/L), insufficient (≥37.5 nmol/L, <80 nmol/L) or sufficient (≥80 nmol/L).
Fisher's exact tests and t tests were used to determine whether vitamin D status (sufficiency or insufficiency) or 25(OH)D concentrations differed significantly by sex or age category. Predictors of vitamin D insufficiency and 25(OH)D concentrations were taken from questionnaires carried out during the previous study, and analyzed in the present study using multiple regression prediction models. Fisher's exact test and the t test were used to determine whether vitamin D status or 25(OH)D concentration differed by LTBI status. The strength of the relationship between interferon-gamma (IFN-γ) concentration (released by peripheral T cells in response to TB antigens) and 25(OH)D concentration was analyzed using a Spearman correlation. Of 87 participants included in the study (78% male; mean age 38 years), 14 were identified as LTBI positive, but none had any signs or symptoms of TB reactivation. Only 30% of the participants were vitamin D sufficient, whereas 68% were insufficient and 2% were deficient. Significant independent predictors of lower 25(OH)D concentrations were sex, number of years enrolled in the SAWP and length of stay in Canada. No significant differences were found between 25(OH)D concentrations and LTBI status. There was a significant moderate correlation between IFN-γ and 25(OH)D concentrations of LTBI-positive individuals. The majority of participants presented with vitamin D insufficiency but none were severely deficient, indicating that 25(OH)D concentrations do not decrease dramatically in populations who temporarily reside in Canada but return to their countries of origin during the Canadian winter. This study did not find a statistical relationship between low levels of vitamin D and LTBI, which suggests that in the presence of overall good health, lower than ideal levels of 25(OH)D may still exert a protective immunological effect against LTBI reactivation.
The challenge remains to determine a critical 25(OH)D concentration at which reactivation is more likely to occur.
Abstract:
In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
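The Monte Carlo test idea described above can be sketched for Wilks' LR criterion. Because the null distribution of Wilks' lambda for a uniform linear hypothesis does not depend on the error covariance, simulating with i.i.d. standard normal errors suffices; the design, hypothesis and replication count below are illustrative choices, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(5)

def wilks_lambda(Y, X, X0):
    """Wilks' LR criterion |E|/|E0| comparing the full regressor matrix X
    with the restricted matrix X0 (the uniform linear null hypothesis)."""
    def cross_rss(Xm):
        B, *_ = np.linalg.lstsq(Xm, Y, rcond=None)
        R = Y - Xm @ B
        return R.T @ R
    return np.linalg.det(cross_rss(X)) / np.linalg.det(cross_rss(X0))

# Simulated MLR system: n observations, 2 responses; test whether the
# last two regressors enter any equation.
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
X0 = X[:, :2]                        # null keeps intercept and x1 only
Y = rng.normal(size=(n, 2))          # data generated under the null

lam_obs = wilks_lambda(Y, X, X0)

# Monte Carlo test: simulate the nuisance-parameter-free null
# distribution and rank the observed statistic (small lambda = evidence
# against the null).
sims = np.array([wilks_lambda(rng.normal(size=(n, 2)), X, X0)
                 for _ in range(199)])
p_value = (1 + np.sum(sims <= lam_obs)) / (1 + len(sims))
print(f"Monte Carlo p-value: {p_value:.3f}")
```

With 199 replications the test is exact at conventional levels; the bounds Monte Carlo tests for arbitrary hypotheses proceed analogously, simulating the bounding distribution instead.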
Abstract:
Objective: To evaluate the performance of screening for gestational hypertension using maternal demographic characteristics, serum biomarkers and uterine artery Doppler in the first and second trimesters of pregnancy, and to develop predictive models for gestational hypertension based on these parameters. Methods: This was a prospective cohort study of 598 nulliparous women. Uterine Doppler was assessed by transabdominal ultrasound at 11+0 to 13+6 weeks (first trimester) and at 17+0 to 21+6 weeks (second trimester). All serum samples for the measurement of several placental biomarkers were collected in the first trimester, and maternal demographic characteristics were recorded at the same time. ROC curves and predictive values were used to analyze the predictive power of these parameters, and different combinations and their logistic regression models were also analyzed. Results: Among the 598 women, there were 20 cases of pre-eclampsia (3.3%), 7 of early-onset pre-eclampsia (1.2%), 52 of gestational hypertension (8.7%) and 10 of gestational hypertension before 37 weeks (1.7%). The second-trimester uterine artery pulsatility index was the best single predictor. In multivariate logistic regression analysis, the best predictive value in both the first and second trimesters was obtained for the prediction of early-onset pre-eclampsia. Combined screening gave markedly better results than maternal parameters or Doppler alone. Conclusion: As a single marker, second-trimester uterine Doppler has the best predictive value for hypertension, preterm birth and growth restriction. Combining maternal demographic characteristics, maternal serum biomarkers and uterine Doppler improves screening performance, particularly for pre-eclampsia requiring preterm delivery.