975 results for Log-log Method
Abstract:
OBJECTIVE: To test common genetic variants for association with seasonality (seasonal changes in mood and behavior) and to investigate whether there are shared genetic risk factors between psychiatric disorders and seasonality. METHOD: Genome-wide association studies (GWASs) were conducted in Australian (between 1988 and 1990 and between 2010 and 2013) and Amish (between May 2010 and December 2011) samples in whom the Seasonal Pattern Assessment Questionnaire (SPAQ) had been administered, and the results were meta-analyzed in a total sample of 4,156 individuals. Genetic risk scores based on results from prior large GWAS studies of bipolar disorder, major depressive disorder (MDD), and schizophrenia were calculated to test for overlap in risk between psychiatric disorders and seasonality. RESULTS: The most significant association was with rs11825064 (P = 1.7 × 10⁻⁶, β = 0.64, standard error = 0.13), an intergenic single nucleotide polymorphism (SNP) found on chromosome 11. The evidence for overlap in risk factors was strongest for schizophrenia and seasonality, with the schizophrenia genetic profile scores explaining 3% of the variance in log-transformed global seasonality scores. Bipolar disorder genetic profile scores were also associated with seasonality, although at much weaker levels (minimum P value = 3.4 × 10⁻³), and no evidence for overlap in risk was detected between MDD and seasonality. CONCLUSIONS: Common SNPs of large effect most likely do not exist for seasonality in the populations examined. As expected, there were overlapping genetic risk factors for bipolar disorder (but not MDD) with seasonality. Unexpectedly, the risk for schizophrenia and seasonality had the largest overlap, an unprecedented finding that requires replication in other populations and has potential clinical implications considering overlapping cognitive deficits in seasonal affective disorders and schizophrenia.
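As an illustration of the genetic profile score analysis described above, the following minimal Python sketch (not the authors' pipeline) regresses a log-transformed global seasonality score on a schizophrenia profile score and reports the variance explained; the input file and column names are hypothetical, and a plain ordinary least squares fit stands in for whatever covariate-adjusted model was actually used.

    # Hypothetical column names; illustrative OLS fit of a log-transformed
    # global seasonality score (GSS) on a schizophrenia genetic profile score.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("seasonality_with_profile_scores.csv")  # hypothetical input
    y = np.log1p(df["global_seasonality_score"])              # log-transformed GSS
    X = sm.add_constant(df[["scz_profile_score"]])            # genetic profile score

    fit = sm.OLS(y, X).fit()
    print(fit.summary())
    print(f"Variance explained (R^2): {fit.rsquared:.3f}")    # abstract reports ~3%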
Abstract:
The purpose of this master's thesis is to examine the fatigue strength of a log stacker boom using the hot-spot method. The fatigue analysis is based on the assumption that the stress history of the work cycles remains the same throughout the service life. The thesis identifies the critical locations of the boom with respect to fatigue strength and the factors affecting boom fatigue. The FE analysis of the boom structure was carried out with the Femap NX Nastran software.
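The structural hot-spot stress is commonly obtained by linear surface extrapolation of FE stresses at reference points 0.4t and 1.0t from the weld toe (an IIW-style rule). Whether the thesis used exactly this variant is an assumption, and the stress values below are placeholders rather than results from the Femap NX Nastran model; the sketch only shows the extrapolation step.

    # Minimal sketch: structural hot-spot stress by linear surface extrapolation
    # from the 0.4t and 1.0t reference points (placeholder values, not FE results).
    def hot_spot_stress(sigma_04t: float, sigma_10t: float) -> float:
        """Linear extrapolation to the weld toe: 1.67*s(0.4t) - 0.67*s(1.0t)."""
        return 1.67 * sigma_04t - 0.67 * sigma_10t

    sigma_04t = 182.0   # surface stress at 0.4*t from the weld toe, MPa (placeholder)
    sigma_10t = 165.0   # surface stress at 1.0*t from the weld toe, MPa (placeholder)
    print(f"Hot-spot stress: {hot_spot_stress(sigma_04t, sigma_10t):.1f} MPa")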
Abstract:
Films that can be used as packaging are produced from macromolecules such as polysaccharides, lipids, and proteins. Proteins stand out among these because their structure contains 20 different monomers, which confers a broad potential for intermolecular bonding. Incorporating active agents into films is a packaging alternative for inhibiting or delaying the growth of pathogenic and spoilage microorganisms in food. The objective of this work was to evaluate the antimicrobial activity of films based on anchovy (Engraulis anchoita) protein isolate (IPA) with added organic acids. To this end, the IPA was prepared from mechanically separated meat by alkaline solubilization of the protein and precipitation at the isoelectric point. The IPA was characterized in terms of its proximate and amino acid composition and by DSC. The film-forming solution was prepared from IPA, water, glycerol, and sodium hydroxide. The film formulations followed a 2³ factorial design. The IPA-based films were evaluated for the physicochemical properties of tensile strength (RT) and elongation (E); thickness, solubility, and water vapor permeability (PVA); color difference (∆E*) and opacity (Y); and by scanning electron microscopy (MEV). Films with different concentrations of sorbic acid (AS) or benzoic acid (AB) were developed from the condition with the best physicochemical properties and compared with control films. These were evaluated for antimicrobial activity against Escherichia coli O157:H7, Listeria monocytogenes, Staphylococcus aureus, and Salmonella Enteritidis by the disc diffusion method, in addition to the physicochemical properties, MEV, and FT-IR. The films with the greatest antimicrobial activity and the control films were applied to beef inoculated with the microorganisms inhibited in the disc diffusion assay and stored at 5°C. These were evaluated every 2 days over 12 days of storage by the drop-count plating method. The IPA contained 88.8% protein and 53.3% polar amino acids and had a denaturation temperature of 62.2°C. Film thickness, PVA, ∆E*, and Y were not affected by the variables studied in the experiment. The lowest solubility and highest RT occurred at low IPA concentration, low glycerol, and mild heat treatment, whereas E increased as these variables increased. MEV images of the film surfaces were homogeneous for films given a mild heat treatment. Increasing the AS and AB concentrations in the range of 0.50 to 1.50% decreased RT and increased E, solubility, ∆E*, and Y. FT-IR analysis showed changes in molecular organization and in the intermolecular interactions between the IPA and AB molecules. MEV revealed micropores in films with 1.50% AS, resulting in less homogeneous films. The greatest antimicrobial activity was observed for films with 1.50% AS or AB against E. coli O157:H7, L. monocytogenes, and S. Enteritidis. These films were applied to beef inoculated with E. coli O157:H7 and L. monocytogenes. Relative to the control film, the AS films produced reductions of 5 and 4 log CFU.g-1 against E. coli O157:H7 and L. monocytogenes, respectively. AB produced reductions of 6 and 5 log CFU.g-1 against these bacteria, respectively, by the end of the 12th day of storage. IPA-based films with added AS or AB can therefore be effective against the foodborne pathogens tested.
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which could lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new error correction techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) produced load estimates that were 15% and 30% more accurate than those of the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
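A minimal sketch of the ratio estimator mentioned above, under simplifying assumptions (no Beale-type bias correction, hypothetical variable names and units): the mean sampled daily load is scaled by the ratio of the annual mean flow to the mean flow on the sampled days.

    # Simple ratio estimator for annual nitrate-N load (no bias correction).
    import numpy as np

    def ratio_estimator_annual_load(conc_sampled, flow_sampled, flow_all_days):
        """conc_sampled: mg/L on sampled days; flow_sampled: m3/s on those days;
        flow_all_days: daily mean flow (m3/s) for every day of the year."""
        conc_sampled = np.asarray(conc_sampled, float)
        flow_sampled = np.asarray(flow_sampled, float)
        flow_all_days = np.asarray(flow_all_days, float)

        load_sampled = conc_sampled * flow_sampled * 86.4            # kg/day (mg/L * m3/s * 86.4)
        load_per_flow = load_sampled.mean() / flow_sampled.mean()    # kg/day per (m3/s)
        return load_per_flow * flow_all_days.mean() * len(flow_all_days)  # kg/year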
Abstract:
This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are addressed in the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of the free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Overall, this work contributes to the literature in transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take into account the survey design. All methods are rigorously validated on both real and simulated data.
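The multidimensional integral that appears in an unordered (multinomial probit) likelihood is a rectangular multivariate-normal probability, which SciPy evaluates with Genz-type numerical integration. This is only a compact illustration, not the dissertation's implementation; the covariance matrix and thresholds are made up.

    # Evaluate P(eps <= upper) for a trivariate normal with Genz-type integration.
    import numpy as np
    from scipy.stats import multivariate_normal

    cov = np.array([[1.0, 0.4, 0.2],
                    [0.4, 1.0, 0.3],
                    [0.2, 0.3, 1.0]])
    upper = np.array([0.5, 1.0, 0.0])   # hypothetical utility-difference thresholds

    p = multivariate_normal(mean=np.zeros(3), cov=cov).cdf(upper)
    print(f"Choice probability (illustrative): {p:.4f}")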
Abstract:
Context. In February-March 2014, the MAGIC telescopes observed the high-frequency peaked BL Lac 1ES 1011+496 (z=0.212) in a flaring state at very high energies (VHE, E>100 GeV). The flux reached a level more than 10 times higher than any previously recorded flaring state of the source. Aims. We describe the characteristics of the flare, presenting the light curve and the spectral parameters of the night-wise spectra and the average spectrum of the whole period. From these data we aim to detect the imprint of the Extragalactic Background Light (EBL) in the VHE spectrum of the source, in order to constrain its intensity in the optical band. Methods. We analyzed the gamma-ray data from the MAGIC telescopes using the standard MAGIC software for the production of the light curve and the spectra. To constrain the EBL we implement the method developed by the H.E.S.S. collaboration, in which the intrinsic energy spectrum of the source is modeled with a simple function (< 4 parameters), and the EBL-induced optical depth is calculated using a template EBL model. The likelihood of the observed spectrum is then maximized, including a normalization factor for the EBL opacity among the free parameters. Results. The collected data allowed us to describe the flux changes night by night and also to produce differential energy spectra for all nights of the observed period. The estimated intrinsic spectra of all the nights could be fitted by power-law functions. Evaluating the changes in the fit parameters, we conclude that the spectral shape for most of the nights was compatible, regardless of the flux level, which enabled us to produce an average spectrum from which the EBL imprint could be constrained. The likelihood ratio test shows that the model with an EBL density of 1.07 (-0.20, +0.24) (stat+sys), relative to the one in the tested EBL template (Domínguez et al. 2011), is preferred at the 4.6 σ level over the no-EBL hypothesis, under the assumption that the intrinsic source spectrum can be modeled as a log-parabola. This translates into a constraint on the EBL density in the wavelength range [0.24 μm, 4.25 μm], with a peak value at 1.4 μm of λF_λ = 12.27^(+2.75)_(-2.29) nW m^(-2) sr^(-1), including systematics.
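A hedged sketch of the spectral-fitting idea described in the Methods: an intrinsic log-parabola is attenuated by exp(-alpha * tau(E)), with tau taken from a template EBL model, and the scaling alpha is profiled. The energies, fluxes and optical depths below are placeholders, and a simple chi-square replaces the full MAGIC likelihood.

    # Scan the EBL scaling alpha, refitting the intrinsic log-parabola each time.
    import numpy as np
    from scipy.optimize import minimize

    E = np.array([0.12, 0.2, 0.35, 0.6, 1.0])                       # TeV (placeholder bins)
    flux = np.array([2.1e-10, 8.0e-11, 2.3e-11, 4.0e-12, 3.0e-13])  # observed (placeholder)
    err = 0.15 * flux
    tau = np.array([0.1, 0.25, 0.55, 1.1, 2.0])                     # template optical depth (placeholder)

    def model(params, alpha):
        f0, a, b = params                                           # log-parabola parameters
        E0 = 0.3
        intrinsic = f0 * (E / E0) ** (-a - b * np.log10(E / E0))
        return intrinsic * np.exp(-alpha * tau)                     # EBL-absorbed spectrum

    def chi2(params, alpha):
        return np.sum(((flux - model(params, alpha)) / err) ** 2)

    profile = {alpha: minimize(chi2, x0=[2e-11, 2.0, 0.1], args=(alpha,),
                               method="Nelder-Mead").fun
               for alpha in np.linspace(0.0, 1.5, 16)}
    print("Preferred EBL scaling:", min(profile, key=profile.get))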
Abstract:
The long-term adverse effects on health associated with air pollution exposure can be estimated using either cohort or spatio-temporal ecological designs. In a cohort study, the health status of a cohort of people is assessed periodically over a number of years, and then related to estimated ambient pollution concentrations in the cities in which they live. However, such cohort studies are expensive and time consuming to implement, due to the long-term follow up required for the cohort. Therefore, spatio-temporal ecological studies are also being used to estimate the long-term health effects of air pollution, as they are easy to implement due to the routine availability of the required data. Spatio-temporal ecological studies estimate the health impact of air pollution by utilising geographical and temporal contrasts in air pollution and disease risk across n contiguous small areas, such as census tracts or electoral wards, for multiple time periods. The disease data are counts of the numbers of disease cases occurring in each areal unit and time period, and thus Poisson log-linear models are typically used for the analysis. The linear predictor includes pollutant concentrations and known confounders such as socio-economic deprivation. However, as the disease data typically contain residual spatial or spatio-temporal autocorrelation after the covariate effects have been accounted for, these known covariates are augmented by a set of random effects. One key problem in these studies is estimating spatially representative pollution concentrations in each areal unit; these are typically estimated by applying Kriging to data from a sparse monitoring network, or by computing averages over modelled concentrations (grid level) from an atmospheric dispersion model. The aim of this thesis is to investigate the health effects of long-term exposure to nitrogen dioxide (NO2) and particulate matter (PM10) in mainland Scotland, UK. In order to gain an initial impression of the air pollution health effects in mainland Scotland, chapter 3 presents a standard epidemiological study using a benchmark method. The remaining main chapters (4, 5, 6) cover the main methodological focus of this thesis, which is threefold: (i) how to better estimate pollution by developing a multivariate spatio-temporal fusion model that relates monitored and modelled pollution data over space, time and pollutant; (ii) how to simultaneously estimate the joint effects of multiple pollutants; and (iii) how to allow for the uncertainty in the estimated pollution concentrations when estimating their health effects. Specifically, chapters 4 and 5 are developed to achieve (i), while chapter 6 focuses on (ii) and (iii). In chapter 4, I propose an integrated model for estimating the long-term health effects of NO2, which fuses modelled and measured pollution data to provide improved predictions of areal level pollution concentrations and hence health effects. The air pollution fusion model proposed is a Bayesian space-time linear regression model for relating the measured concentrations to the modelled concentrations for a single pollutant, whilst allowing for additional covariate information such as site type (e.g. roadside, rural) and temperature. However, it is known that some pollutants might be correlated because they may be generated by common processes or be driven by similar factors such as meteorology. The correlation between pollutants can help to predict one pollutant by borrowing strength from the others.
Therefore, in chapter 5, I propose a multi-pollutant model, a multivariate spatio-temporal fusion model that extends the single pollutant model in chapter 4 and relates monitored and modelled pollution data over space, time and pollutant to predict pollution across mainland Scotland. Because we are exposed to multiple pollutants simultaneously, as the air we breathe contains a complex mixture of particle- and gas-phase pollutants, the health effects of exposure to multiple pollutants are investigated in chapter 6; this is a natural extension of the single pollutant health effects analysis in chapter 4. Given that NO2 and PM10 are highly correlated (a multicollinearity issue) in my data, I first propose a temporally-varying linear model to regress one pollutant (e.g. NO2) against another (e.g. PM10) and then use the residuals in the disease model as well as PM10, thus investigating the health effects of exposure to both pollutants simultaneously. Another issue considered in chapter 6 is how to allow for the uncertainty in the estimated pollution concentrations when estimating their health effects; in total, four approaches are developed to adjust for the exposure uncertainty. Finally, chapter 7 summarises the work contained within this thesis and discusses the implications for future research.
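A minimal sketch of the kind of Poisson log-linear disease model described above, with pollutant and deprivation covariates and the expected count entering as an offset; the file and column names are hypothetical, and the Bayesian spatio-temporal random effects used in the thesis are not reproduced here.

    # Poisson log-linear model: observed counts vs. pollutants and deprivation,
    # with log(expected counts) as an offset so coefficients act on log relative risk.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("areal_health_pollution.csv")   # hypothetical input
    fit = smf.glm("observed ~ no2 + pm10 + deprivation",
                  data=df,
                  family=sm.families.Poisson(),
                  offset=np.log(df["expected"])).fit()
    print(fit.summary())
    print("Relative risk per unit NO2:", np.exp(fit.params["no2"]))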
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or purely visual approach for comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
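Illustrative only, not the dissertation's HVHT implementation: the sketch below runs one test per metric when comparing two cohorts and then controls the false discovery rate (Benjamini-Hochberg) so that reported differences survive the multiplicity of tests. The cohort metrics are simulated.

    # Many tests across metrics, then FDR correction to flag significant differences.
    import numpy as np
    from scipy.stats import mannwhitneyu
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    metrics = {f"metric_{i}": (rng.normal(0.0, 1.0, 200),
                               rng.normal(0.1 * (i % 3), 1.0, 200))
               for i in range(30)}                    # simulated per-metric cohort values

    pvals = {name: mannwhitneyu(a, b).pvalue for name, (a, b) in metrics.items()}
    reject, p_adj, _, _ = multipletests(list(pvals.values()), alpha=0.05, method="fdr_bh")
    for (name, p), keep, q in zip(pvals.items(), reject, p_adj):
        if keep:
            print(f"{name}: raw p = {p:.4f}, BH-adjusted p = {q:.4f}")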
Abstract:
Purpose: To compare the oral bioavailability and pharmacokinetic parameters of different lornoxicam formulations and to assess similarity in plasma level profiles by statistical techniques. Methods: An open-label, two-period crossover trial was conducted in 24 healthy Pakistani volunteers (22 males, 2 females). Each participant received a single dose of lornoxicam controlled release (CR) microparticles and two doses (morning and evening) of a conventional lornoxicam immediate release (IR) tablet formulation. The microparticles were prepared by a spray drying method. The formulations were administered again in the alternate order after a washout period of one week. Pharmacokinetic parameters were determined with Kinetica 4.0 software using plasma concentration-time data. Moreover, the data were statistically analyzed using 90 % confidence intervals (CI) and Schuirmann's two one-sided t-test procedure. Results: Peak plasma concentration (Cmax) was 20.2 % lower for the CR formulation than for the IR formulation (270.90 ng/ml vs 339.44 ng/ml), while the time taken to attain Cmax (tmax) was 5.25 and 2.08 h, respectively. The area under the plasma concentration versus time curve (AUC) was comparable for the CR and IR formulations. The 90 % confidence interval (CI) values computed for Cmax, AUC0-24, and AUC0-∞, after log transformation, were 87.21, 108.51 and 102.74 %, respectively, and were within the predefined bioequivalence range (80 - 125 %). Conclusion: The findings suggest that the CR formulation did not change the overall pharmacokinetic properties of lornoxicam in terms of the extent and rate of absorption.
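A simplified sketch of the log-transformed 90 % CI computation behind such a bioequivalence assessment, reduced to a paired comparison (a full analysis would use an ANOVA with sequence, period and subject effects); the per-subject Cmax arrays are placeholders, not the study data.

    # 90% CI for the CR/IR geometric mean ratio of a log-transformed PK parameter.
    import numpy as np
    from scipy import stats

    cmax_cr = np.array([265.0, 280.0, 255.0, 290.0, 270.0, 260.0])  # ng/mL (placeholder)
    cmax_ir = np.array([335.0, 345.0, 330.0, 350.0, 340.0, 336.0])  # ng/mL (placeholder)

    d = np.log(cmax_cr) - np.log(cmax_ir)        # within-subject log differences
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t90 = stats.t.ppf(0.95, df=n - 1)             # two one-sided 5% tests
    lo, hi = np.exp(d.mean() - t90 * se), np.exp(d.mean() + t90 * se)
    print(f"Point estimate: {np.exp(d.mean()):.1%}, 90% CI: [{lo:.1%}, {hi:.1%}]")
    print("Within 80-125% bioequivalence range:", lo >= 0.80 and hi <= 1.25)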
Abstract:
Master's dissertation, Universidade de Brasília, Departamento de Administração, Programa de Pós-graduação em Administração, 2016.
Abstract:
Dust attenuation affects nearly all observational aspects of galaxy evolution, yet very little is known about the form of the dust-attenuation law in the distant universe. Here, we model the spectral energy distributions of galaxies at z ~ 1.5–3 from CANDELS with rest-frame UV to near-IR imaging under different assumptions about the dust law, and compare the amount of inferred attenuated light with the observed infrared (IR) luminosities. Some individual galaxies show strong Bayesian evidence in preference of one dust law over another, and this preference agrees with their observed location on the plane of infrared excess (IRX, L_TIR/L_UV) and UV slope (β). We generalize the shape of the dust law with an empirical model, A_λ,δ = E(B-V) k_λ (λ/λ_V)^δ, where k_λ is the dust law of Calzetti et al., and show that there exists a correlation between the color excess E(B-V) and tilt δ, with δ = (0.62±0.05)log(E(B-V)) + (0.26±0.02). Galaxies with high color excess have a shallower, starburst-like law, and those with low color excess have a steeper, SMC-like law. Surprisingly, the galaxies in our sample show no correlation between the shape of the dust law and stellar mass, star formation rate, or β. The change in the dust law with color excess is consistent with a model where attenuation is caused by scattering, a mixed star–dust geometry, and/or trends with stellar population age, metallicity, and dust grain size. This rest-frame UV-to-near-IR method shows potential to constrain the dust law at even higher redshifts (z>3).
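A hedged numerical sketch of the generalized law quoted above: A_λ is evaluated as E(B-V) k_λ (λ/λ_V)^δ with k_λ from the Calzetti et al. (2000) curve (standard published coefficients, quoted from memory rather than re-derived) and δ tied to E(B-V) through the fitted relation in the abstract.

    # Evaluate the generalized dust attenuation law for a few rest-frame wavelengths.
    import numpy as np

    def k_calzetti(lam_um):
        """Calzetti et al. (2000) starburst curve, R_V = 4.05, wavelength in microns."""
        lam_um = np.asarray(lam_um, float)
        return np.where(
            lam_um < 0.63,
            2.659 * (-2.156 + 1.509 / lam_um - 0.198 / lam_um**2 + 0.011 / lam_um**3) + 4.05,
            2.659 * (-1.857 + 1.040 / lam_um) + 4.05,
        )

    def attenuation(lam_um, ebv, lam_v=0.55):
        delta = 0.62 * np.log10(ebv) + 0.26      # tilt from the color excess (abstract relation)
        return ebv * k_calzetti(lam_um) * (np.asarray(lam_um) / lam_v) ** delta

    lam = np.array([0.15, 0.30, 0.55, 1.0])      # rest-frame microns
    print(attenuation(lam, ebv=0.3))              # A_lambda in magnitudes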
Abstract:
Trammel net size selectivity was studied for the most important metiers in four southern European areas: the Cantabrian Sea (Atlantic, Basque Country, Spain), the Algarve (Atlantic, southern Portugal), the Gulf of Cadiz (Atlantic, Spain) and the Cyclades Islands (Mediterranean, Aegean Sea, Greece). These metiers were: cuttlefish (Sepia officinalis) and soles (Solea senegalensis, Microchirus azevia, Synaptura lusitanica) in the Algarve and the Gulf of Cadiz, sole (Solea solea) in the Cantabrian Sea and mixed fin-fish in the Cyclades. In each area, experimental trammel nets of six different types (combinations of two large outer panel mesh sizes and three small inner panel meshes) were constructed. Fishing trials were carried out on a seasonal basis (four seasons in the Cantabrian Sea, Algarve and Cyclades and two seasons in the Gulf of Cadiz) with chartered commercial fishing vessels. Overall, size selectivity was estimated for 17 out of 28 species for which sufficient data were available. Trammel nets generally caught a wide size range of the most important species, with length frequency distributions that were skewed to the right and/or bi-modal. In many cases the length frequency distributions of the different nets overlapped considerably. The Kolmogorov-Smirnov test also showed that the large outer panel meshes generally had no effect in terms of size selectivity, while the opposite was true for the small inner panel ones. Six different selectivity models (normal scale, normal location, gamma, log-normal, bi-modal and gamma semi-Wileman) were fitted to data for the most abundant species in the four areas. For fish, the bi-modal model provided the best fits for the majority of the data sets, with the uni-modal models giving poor fits in most cases. For Sepia officinalis, where trammelling or pocketing was the method of capture in 100% of the cases, the logistic model fitted by maximum likelihood was judged to be more appropriate for describing the size selective properties of the trammel nets. Our results, which are among the first on trammel net selectivity in European waters, will be useful for evaluating the impacts of competing gear for the socio-economically important small-scale static gear fisheries. (c) 2006 Elsevier B.V. All rights reserved.
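Illustrative only: the study fitted its selectivity models to comparative catch data across mesh sizes, whereas the sketch below simply shows a maximum-likelihood fit of a logistic retention curve, of the kind judged appropriate for Sepia officinalis, to simulated retained-versus-total counts per length class.

    # Fit r(l) = 1 / (1 + exp(-(a + b*l))) by maximum likelihood to binomial data.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    lengths = np.arange(8.0, 26.0)                      # cm, placeholder length classes
    n_total = np.full(lengths.shape, 40)                # individuals per class (simulated)
    n_caught = np.random.default_rng(1).binomial(n_total, expit(-10 + 0.8 * lengths))

    def neg_log_lik(params):
        a, b = params
        r = np.clip(expit(a + b * lengths), 1e-9, 1 - 1e-9)
        return -np.sum(n_caught * np.log(r) + (n_total - n_caught) * np.log(1 - r))

    a_hat, b_hat = minimize(neg_log_lik, x0=[-5.0, 0.5]).x
    print(f"L50 = {-a_hat / b_hat:.1f} cm")              # length at 50% retention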
Abstract:
Pesticide application has been described by many researchers as a very inefficient process. In some cases, reports indicate that only 0.02% of the applied product acts in effective control of the problem. The main factor influencing pesticide application is the droplet size formed at the spray nozzles. Many parameters affect the dynamics of the droplets, such as wind, temperature, relative humidity, and others. Small droplets are biologically more active, but they are affected by evaporation and drift. On the other hand, large droplets do not promote a good distribution of the product on the target. Therefore, given the risk of contaminating non-target areas and the high costs involved in application, knowledge of the droplet size is of fundamental importance in application technology. When sophisticated technology for droplet analysis is unavailable, it is common to use artificial targets such as water-sensitive paper to sample the droplets. In field sampling, water-sensitive papers are placed on the plots where the product will be applied. When droplets impinge on the paper, its yellow surface is stained dark blue, making the droplets easy to recognize. The droplets collected on these papers have a range of sizes, so the droplet size distribution gives the mass distribution of the material and, hence, the efficiency of the application. The stains produced by the droplets show a spread factor proportional to their respective initial sizes. One methodology for analysing the droplets is to count and measure them under a microscope using the Porton N-G12 graticule, whose equally spaced class intervals follow a geometric progression of ratio √2, coupled to the microscope lens. The droplet size parameters most frequently used are the Volumetric Median Diameter (VMD) and the Numeric Median Diameter (NMD). At the VMD value, a representative droplet sample is divided into two parts of equal volume, such that one part contains droplets smaller than the VMD and the other contains droplets larger than the VMD. The same process is used to obtain the NMD, which divides the sample into two equal parts with respect to the number of droplets. The ratio between VMD and NMD allows the droplet uniformity to be evaluated. The cumulative probability of droplet volume and number is then plotted on log-scale paper (cumulative probability versus the median diameter of each size class), and the NMD is read from the x-axis at the point corresponding to 50% on the y-axis. This whole process is very slow and subject to operator error. Therefore, to reduce the difficulty involved in measuring droplets, a numerical model was developed, implemented in an easy and accessible computational language, which yields approximate VMD and NMD values with good precision. The inputs to this model are the frequencies of the droplet sizes collected on the water-sensitive paper, observed through the Porton N-G12 graticule fitted to the microscope. With these data, the cumulative distributions of droplet volume and size are evaluated. The graphs obtained by plotting these distributions allow the VMD and NMD to be obtained by linear interpolation, since the curves are approximately linear in the middle of the distributions. These values are essential for evaluating droplet uniformity and for estimating the volume deposited on the observed paper from the droplet density (droplets/cm²).
This methodology for estimating the droplet volume was developed within Project 11.0.94.224 of CNPMA/EMBRAPA. Data from aerial herbicide spraying samples, collected by the Project in Pelotas/RS county, were used to compare values obtained with the manual graphical method against those obtained with the model. The model reproduced, with good precision, the VMD and NMD values for each sampled collector, allowing the quantity of deposited product and, consequently, the quantity lost by drift to be estimated. The plots of the variability of VMD and NMD showed that the number of droplets reaching the collectors had a small dispersion, while the deposited volume showed a wide range of variation, probably because of the strong action of air turbulence on the droplet distribution, emphasizing the need for a deeper study to verify this influence on drift.
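A sketch of the numerical procedure described above, under stated simplifications: the class midpoints and frequencies are placeholders, the spread-factor correction for the stains is not applied, and the cumulative number and volume distributions are interpolated linearly at 50% to give the NMD and VMD.

    # Cumulative number and volume distributions over droplet size classes,
    # interpolated at 50% to obtain the NMD and VMD.
    import numpy as np

    diam_um = np.array([60.0, 85.0, 120.0, 170.0, 240.0, 340.0, 480.0])  # class midpoints (placeholder)
    counts = np.array([120, 250, 400, 380, 220, 90, 25])                  # droplets per class (placeholder)

    volume = counts * diam_um ** 3                 # relative volume per class
    cum_num = np.cumsum(counts) / counts.sum()
    cum_vol = np.cumsum(volume) / volume.sum()

    nmd = np.interp(0.5, cum_num, diam_um)         # Numeric Median Diameter
    vmd = np.interp(0.5, cum_vol, diam_um)         # Volumetric Median Diameter
    print(f"NMD = {nmd:.0f} um, VMD = {vmd:.0f} um, VMD/NMD = {vmd / nmd:.2f}")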
Abstract:
Intelligent systems are now inherent to society, supporting synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems, and poor software functioning may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis of human-centered applications, with interpretability being one of the most important features of computational systems. Software maintenance is a critical discipline for supporting automatic and life-long system operation. As most software registers its inner events by means of logs, log analysis is an approach to keeping systems operational. Logs are characterized as Big Data assembled in high-volume streams, and are unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods to provide maintenance solutions applied to anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analysis. LP provides a deeper semantic interpretation of the anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to approach automatic parsing of system logs. All the methods apply recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. Essentially, regarding AD accuracy, FBeM achieved (85.64 ± 3.69)%, eGNN reached (96.17 ± 0.78)%, eGFC obtained (92.48 ± 1.21)%, and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP in particular generates a log grammar and presents a higher level of model interpretability.
Abstract:
The clinical utility of the routine use of next-generation sequencing (NGS) techniques in patients with metastatic colorectal cancer needs further investigation. In this study, we evaluated the impact of a 52-gene NGS panel used in routine clinical practice. We analysed the results of multigene molecular testing in patients with metastatic colorectal cancer (mCRC) in an observational, retrospective, single-centre study of mCRC patients consecutively tested at an Italian cancer centre between June 2019 and December 2020. Survival analyses were carried out with the Kaplan-Meier method, log-rank tests, and Cox models. Overall, 179 patients with mCRC were included. Median follow-up was 33 months (IQR: 28.45–NR). The four most frequently mutated genes were KRAS (48.6%), PIK3CA (22.4%), BRAF (14.5%), and APC (8.4%). A positive association was found between overall survival (OS) and KRAS mutations with a high variant allele frequency (VAF) [HR: 0.60 (0.36–0.99), P = 0.047]. BRAF mutation was associated with shorter OS [HR: 2.62 (1.59–4.32), P < 0.001]. The NGS panel allowed eight patients to access molecularly targeted therapies not yet registered for colorectal cancer. In conclusion, NGS panels in mCRC are feasible in clinical practice when run in reference laboratories, allowing a lower impact on costs and regular updating. BRAF mutation is associated with a worse prognosis. KRAS mutations with a high variant allele frequency were associated with longer overall survival compared with KRAS wild-type patients. Further studies are needed to better analyse the prognostic factors.
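A minimal sketch of the survival analyses named above (Kaplan-Meier, log-rank test, Cox model) using the lifelines package; the input file, column names and covariates are hypothetical, not the study dataset.

    # Kaplan-Meier by BRAF status, log-rank test, and a Cox model for overall survival.
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    df = pd.read_csv("mcrc_ngs_cohort.csv")        # hypothetical input
    braf = df["braf_mut"] == 1

    km = KaplanMeierFitter()
    km.fit(df.loc[braf, "os_months"], df.loc[braf, "death"], label="BRAF mutated")

    lr = logrank_test(df.loc[braf, "os_months"], df.loc[~braf, "os_months"],
                      df.loc[braf, "death"], df.loc[~braf, "death"])
    print("log-rank p =", lr.p_value)

    cph = CoxPHFitter()
    cph.fit(df[["os_months", "death", "braf_mut", "kras_high_vaf"]],
            duration_col="os_months", event_col="death")
    cph.print_summary()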