924 results for 280401 Analysis of Algorithms and Complexity


Relevance: 100.00%

Abstract:

The goal of this study is the analysis of the dynamical properties of financial data series from worldwide stock market indexes during the period 2000–2009. We analyze, under a regional criterion, ten main indexes at a daily time horizon. The methods and algorithms that have been developed for the description of dynamical phenomena provide an effective background for the analysis of economic data. We start by applying the classical concepts of signal analysis, the fractional Fourier transform, and methods of fractional calculus. In a second phase we adopt the multidimensional scaling approach. Stock market indexes are examples of complex interacting systems for which a huge amount of data exists. Therefore, these indexes, viewed from different perspectives, lead to new classification patterns.
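
As a hedged illustration of the multidimensional scaling step described above, the sketch below embeds a handful of index return series in two dimensions using a correlation-based distance; the index labels and the synthetic returns are placeholders rather than the study's actual data.

```python
# A minimal sketch (not the study's exact pipeline): 2-D multidimensional
# scaling of stock-market indexes from a correlation-based distance between
# their daily return series. Index labels and returns are synthetic placeholders.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
indexes = ["DJIA", "NASDAQ", "FTSE", "DAX", "NIKKEI"]        # illustrative labels only
returns = rng.normal(0.0, 0.01, size=(len(indexes), 2500))   # ~10 years of daily returns

corr = np.corrcoef(returns)                                  # pairwise correlation of the return series
dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))       # common correlation-to-distance mapping

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)
for name, (x, y) in zip(indexes, coords):
    print(f"{name}: ({x:+.3f}, {y:+.3f})")
```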

Relevance: 100.00%

Abstract:

Remote sensing - the acquisition of information about an object or phenomenon without making physical contact with it - is applied in a multitude of areas, including agriculture, forestry, cartography, hydrology, geology, meteorology, and air traffic control, among many others. In agriculture, one application of this information is crop detection: monitoring existing crops easily and supporting the region's strategic planning. In all of these areas there is an ongoing search for methods that yield better results. For over forty years, the Landsat program has used satellites to collect spectral information from Earth's surface, creating a historical archive unmatched in quality, detail, coverage, and length. The most recent satellite was launched on February 11, 2013, with a number of improvements over its predecessors. This project aims to compare classification methods for crop detection in Portugal's Ribatejo region: state-of-the-art algorithms will be applied to this region and their performance will be analyzed.
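
As a rough sketch of how such a comparison of classification methods might be set up, the snippet below cross-validates two commonly used pixel-wise classifiers on a synthetic feature matrix of spectral bands; the band count, class labels, and data are illustrative assumptions, not the project's actual Landsat imagery or algorithms.

```python
# A hedged sketch of comparing pixel-wise crop classifiers on Landsat-style
# spectral bands. X (n_pixels x n_bands) and y are synthetic stand-ins for a
# labelled study area.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((1000, 7))            # 7 surface-reflectance bands per pixel (placeholder)
y = rng.integers(0, 4, size=1000)    # 4 hypothetical crop classes

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM (RBF)", SVC(kernel="rbf", gamma="scale"))]:
    scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```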

Relevance: 100.00%

Abstract:

The monitoring data collected during tunnel excavation can be used in inverse analysis procedures to identify more realistic geomechanical parameters and thus increase the knowledge about the formations of interest. These more realistic parameters can be used in real time to adapt the design to the actual in situ behaviour of the structure. However, monitoring plans are normally designed for safety assessment and not specifically for the purpose of inverse analysis. In fact, there is a lack of knowledge about which types and quantities of measurements are needed to identify the parameters of interest successfully. The optimisation algorithm chosen for the identification procedure may also be important in this respect. In this work, the problem is addressed with a theoretical case on which a thorough parametric study was carried out using two optimisation algorithms based on different calculation paradigms, namely a conventional gradient-based algorithm and an evolution strategy algorithm. Calculations were carried out for different sets of parameters to be identified and for several combinations of types and amounts of monitoring data. The results clearly show how strongly the available monitoring data and the chosen algorithm influence the success rate of the inverse analysis process.
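
The sketch below illustrates the back-analysis loop in miniature, assuming a toy forward model and a simple (1+1) evolution strategy: parameter values are perturbed at random and kept only when they reduce the misfit to the monitored displacements. The forward model, parameter names, and values are placeholders, not the study's numerical tunnel model.

```python
# A toy sketch of inverse analysis with a (1+1) evolution strategy. The forward
# model below is a placeholder, not a numerical tunnel model.
import numpy as np

def forward_model(params, positions):
    """Placeholder relation between parameters (E, K0) and displacements."""
    E, K0 = params
    return positions / E + K0 * np.log1p(positions)

rng = np.random.default_rng(2)
positions = np.linspace(1.0, 10.0, 8)              # measurement locations (arbitrary units)
true_params = np.array([50.0, 0.8])                # "unknown" ground truth
observed = forward_model(true_params, positions)   # monitored displacements (noise-free here)

def misfit(params):
    return np.sum((forward_model(params, positions) - observed) ** 2)

x = np.array([20.0, 0.4])                          # initial guess
step = np.array([5.0, 0.1])                        # fixed mutation step per parameter
for _ in range(2000):
    candidate = x + step * rng.standard_normal(2)
    if misfit(candidate) <= misfit(x):             # keep only improving candidates
        x = candidate
print("identified parameters (E, K0):", np.round(x, 3))
```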

Relevance: 100.00%

Abstract:

PhD thesis in Biomedical Engineering

Relevance: 100.00%

Abstract:

We study the properties of the well-known Replicator Dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of these dynamics under strongly simplifying assumptions (i.e., only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax these 'strongly simplifying assumptions', we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics, without imposing any of the assumptions of the analytical model. Our main conclusion is that analytical and computational models are good complements for research in the social sciences. Indeed, while on the one hand computational models are extremely useful to extend the scope of the analysis to complex scenarios…
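
A minimal sketch of the analytical part is given below, assuming the three strategies are Always Defect, Always Cooperate, and Tit-for-Tat and using standard stage payoffs; it integrates the replicator equation for the repeated-game payoff matrix and is illustrative rather than the paper's exact specification.

```python
# Replicator dynamics for a finitely repeated Prisoner's Dilemma restricted to
# three strategies: Always Defect (ALLD), Always Cooperate (ALLC), Tit-for-Tat
# (TFT). Stage payoffs and the number of repetitions are illustrative choices.
import numpy as np

T, R, P, S = 5.0, 3.0, 1.0, 0.0   # temptation, reward, punishment, sucker payoffs
n = 10                            # number of repetitions of the stage game

# Total payoff of the row strategy against the column strategy over n rounds.
# Order: [ALLD, ALLC, TFT]
A = np.array([
    [n * P,           n * T,  T + (n - 1) * P],
    [n * S,           n * R,  n * R          ],
    [S + (n - 1) * P, n * R,  n * R          ],
])

x = np.array([0.10, 0.45, 0.45])  # initial population shares
dt = 0.01
for _ in range(5000):
    f = A @ x                     # expected payoff of each strategy
    x = x + dt * x * (f - x @ f)  # replicator update (explicit Euler step)
    x = np.clip(x, 0.0, None)
    x = x / x.sum()
print("long-run shares [ALLD, ALLC, TFT]:", np.round(x, 3))
```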

Relevance: 100.00%

Abstract:

High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and we conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data from the NCI-60 cell lines. Using information from the DrugBank and the Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
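
The toy code below gives only a schematic sense of alternating ("ping-pong") propagation of scores between a paired expression matrix and drug-response matrix with thresholding at each pass; it is not the published Ping-Pong Algorithm, and the data, dimensions, and cutoff are arbitrary placeholders.

```python
# Schematic alternating score propagation between two matrices measured on the
# same cell lines. NOT the published Ping-Pong Algorithm; everything is a toy.
import numpy as np

rng = np.random.default_rng(3)
E = rng.standard_normal((200, 60))   # genes x cell lines (placeholder expression data)
D = rng.standard_normal((100, 60))   # drugs x cell lines (placeholder response data)

def threshold(v, z=1.0):
    """Keep only entries whose |z-score| exceeds the cutoff; zero the rest."""
    s = (v - v.mean()) / v.std()
    return np.where(np.abs(s) > z, s, 0.0)

gene_score = rng.standard_normal(E.shape[0])     # random seed pattern over genes
for _ in range(20):
    cell_score = threshold(E.T @ gene_score)     # genes -> cell lines
    drug_score = threshold(D @ cell_score)       # cell lines -> drugs
    cell_score = threshold(D.T @ drug_score)     # drugs -> cell lines
    gene_score = threshold(E @ cell_score)       # cell lines -> genes

print("genes in candidate co-module:", np.count_nonzero(gene_score))
print("drugs in candidate co-module:", np.count_nonzero(drug_score))
```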

Relevance: 100.00%

Abstract:

Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
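
As a hedged sketch of the per-sample efficiency estimate, the snippet below subtracts a baseline, selects a window of cycles in the log-linear phase, and takes the PCR efficiency as 10**slope of a regression on log10(fluorescence); the simulated curve, baseline value, and window bounds are illustrative assumptions, not the article's baseline-reconstruction algorithm.

```python
# Per-sample PCR efficiency from a regression over the log-linear phase.
# Simulated data; baseline and window choices are illustrative only.
import numpy as np

cycles = np.arange(1, 41)
base_eff, f0, baseline = 1.9, 1e-6, 0.05                   # simulated amplification parameters
signal = f0 * base_eff ** cycles
fluorescence = baseline + signal / (1.0 + signal / 2.0)    # toy curve with a plateau

corrected = fluorescence - baseline                        # baseline-subtracted signal
window = (corrected > 1e-3) & (corrected < 0.1)            # rough log-linear phase
slope, intercept = np.polyfit(cycles[window], np.log10(corrected[window]), 1)
print(f"estimated PCR efficiency: {10 ** slope:.3f} (simulated base {base_eff})")
```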

Relevance: 100.00%

Abstract:

The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one, and under some constraints, it has a linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method, and obtains closed forms and bounds for their variances. The author also bounds the expected value of the mean square error (MSE). Some of the results obtained are also shown

Relevance: 100.00%

Abstract:

This letter presents a comparison between three Fourier-based motion compensation (MoCo) algorithms for airborne synthetic aperture radar (SAR) systems. These algorithms circumvent the limitations of conventional MoCo, namely the assumption of a reference height and the beam-center approximation. All these approaches rely on the inherent time–frequency relation in SAR systems but exploit it differently, with the consequent differences in accuracy and computational burden. After a brief overview of the three approaches, the performance of each algorithm is analyzed with respect to azimuthal topography accommodation, angle accommodation, and maximum frequency of track deviations with which the algorithm can cope. Also, an analysis of the computational complexity is presented. Quantitative results are shown using real data acquired by the Experimental SAR system of the German Aerospace Center (DLR).

Relevance: 100.00%

Abstract:

The most frequently used method to demonstrate testosterone abuse is the determination of the testosterone and epitestosterone concentration ratio (T/E ratio) in urine. Nevertheless, it is known that factors other than testosterone administration may increase the T/E ratio. In recent years, the determination of the carbon isotope ratio has proven to be the most promising method to help discriminate between naturally elevated T/E ratios and those reflecting T use. In this paper, an excretion study following oral administration of 40 mg testosterone undecanoate initially and 13 h later is presented. Four testosterone metabolites (androsterone, etiocholanolone, 5 alpha-androstanediol, and 5 beta-androstanediol) together with an endogenous reference (5 beta-pregnanediol) were extracted from the urines, and the delta(13)C/(12)C ratio of each compound was analyzed by gas chromatography-combustion-isotope ratio mass spectrometry. The results show similar maximum delta(13)C-value variations (parts per thousand difference of the delta(13)C/(12)C ratio from the isotope ratio standard) for the T metabolites and concomitant changes of the T/E ratios after administration of the first and the second dose of T. Whereas the T/E ratios as well as the androsterone, etiocholanolone, and 5 alpha-androstanediol delta(13)C-values returned to baseline 15 h after the second T administration, a decrease of the 5 beta-androstanediol delta-values could be detected for over 40 h. This suggests that measurements of 5 beta-androstanediol delta-values allow the detection of testosterone ingestion over a longer post-administration period than the delta(13)C-values of other T metabolites or the usual T/E ratio approach.

Relevance: 100.00%

Abstract:

BACKGROUND: An association between alcohol consumption and injury is clearly established for volume of drinking, heavy episodic drinking (HED), and consumption before injury. Little is known, however, about how their interaction raises the risk of injury and which combination of factors carries the highest risk. This study explores which of 11 specified groups of drinkers (a) are at high risk and (b) contribute most to alcohol-attributable injuries. METHODS: In all, 8,736 patients, of whom 5,077 were injured, admitted to the surgical ward of the emergency department of Lausanne University Hospital between January 1, 2003, and June 30, 2004, were screened for alcohol use. Eleven groups were constructed on the basis of usual patterns of intake and preattendance drinking. Odds ratios (ORs) comparing injured and noninjured patients were derived, and alcohol-attributable fractions of injuries were calculated from the ORs and the prevalence of the exposure groups. RESULTS: Risk of injury increased with volume of drinking, HED, and preattendance drinking. For both sexes, the highest risk was associated with low usual intake, HED, and 4 (women) or 5 (men) or more drinks before injury. At the same level of preattendance drinking, high-volume drinkers were at lower risk than low-volume drinkers. In women, the group of low-risk non-HED drinkers taking fewer than 4 drinks suffered 47.5% of the alcohol-attributable injuries, in contrast to only 20.4% for men. Low-volume male drinkers with HED had more alcohol-attributable injuries than low-volume female drinkers with HED (46.9% vs 23.2%). CONCLUSIONS: Although all groups of drinkers are at increased risk of alcohol-related injury, those who usually drink little but on occasion heavily are at particular risk. The lower risk of chronic heavy drinkers may be due to a higher tolerance of alcohol. Prevention should thus target heavy-drinking occasions. Low-volume drinking women without HED and with only little preattendance drinking experienced a high proportion of injuries; such women would be well advised to drink very little or to take other special precautions in risky circumstances.
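
For readers unfamiliar with the calculation, the snippet below shows how attributable fractions are conventionally obtained from exposure prevalences and odds ratios (a Levin-type formula); the group names, prevalences, and ORs are made-up placeholders, not figures from this study.

```python
# Attributable fraction from exposure prevalences and odds ratios.
# All numbers below are illustrative placeholders, not study figures.
prevalence = {"low volume, no HED": 0.50, "low volume, HED": 0.15, "high volume, HED": 0.05}
odds_ratio = {"low volume, no HED": 1.5, "low volume, HED": 3.0, "high volume, HED": 2.0}

excess = {g: prevalence[g] * (odds_ratio[g] - 1.0) for g in prevalence}   # excess risk per group
total_excess = sum(excess.values())
attributable_fraction = total_excess / (1.0 + total_excess)

print(f"overall alcohol-attributable fraction: {attributable_fraction:.1%}")
for g, e in excess.items():
    print(f"  contribution of '{g}': {attributable_fraction * e / total_excess:.1%}")
```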

Relevance: 100.00%

Abstract:

BACKGROUND: Newborn screening (NBS) for Cystic Fibrosis (CF) has been introduced in many countries, but there is no ideal protocol suitable for all countries. This retrospective study was conducted to evaluate whether the planned two-step CF NBS with immunoreactive trypsinogen (IRT) and 7 CFTR mutations would have detected all clinically diagnosed children with CF in Switzerland. METHODS: IRT was measured using the AutoDELFIA Neonatal IRT-Kit in stored NBS cards. RESULTS: Between 2006 and 2009, 66 children with CF were reported, 4 of whom were excluded for various reasons (born in another country, NBS at 6 months, no informed consent). 98% (61/62) had significantly higher IRT compared with a matched control group. There was one false-negative IRT result in an asymptomatic child with atypical CF (normal pancreatic function and sweat test). CONCLUSIONS: All children would have been detected with the planned two-step protocol, except the one child with atypical CF.

Relevance: 100.00%

Abstract:

Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.

Relevance: 100.00%

Abstract:

The reliable and objective assessment of chronic disease state has been, and still is, a very significant challenge in clinical medicine. An essential feature of human behavior related to health status, functional capacity, and quality of life is physical activity during daily life. A common way to assess physical activity is to measure the quantity of body movement. Since human activity is controlled by various factors both extrinsic and intrinsic to the body, quantitative parameters provide only a partial assessment and do not allow a clear distinction between normal and abnormal activity. In this paper, we propose a methodology for the analysis of human activity patterns based on the definition of different physical activity time series together with appropriate analysis methods. The temporal pattern of postures, movements, and transitions between postures was quantified using fractal analysis and symbolic dynamics statistics. The derived nonlinear metrics were able to discriminate patterns of daily activity generated from healthy and chronic pain states.
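
A minimal sketch of the symbolic-dynamics ingredient is shown below: a posture/activity record is encoded as a symbol sequence and the diversity of short symbol "words" is summarised by Shannon entropy; the symbol alphabet, word length, and sequences are illustrative assumptions, not the paper's metrics.

```python
# Shannon entropy of short symbol "words" in an activity sequence.
# Alphabet, word length, and sequences are illustrative placeholders.
from collections import Counter
import math
import random

def word_entropy(symbols, word_len=3):
    """Shannon entropy (bits) of overlapping words of length word_len."""
    words = [tuple(symbols[i:i + word_len]) for i in range(len(symbols) - word_len + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

random.seed(0)
states = "LSUW"                                                   # lying, sitting, upright, walking
varied = [random.choice(states) for _ in range(5000)]             # richly varied activity pattern
monotonous = ["S"] * 4500 + [random.choice(states) for _ in range(500)]

print("entropy (varied pattern):    ", round(word_entropy(varied), 2))
print("entropy (monotonous pattern):", round(word_entropy(monotonous), 2))
```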

Relevance: 100.00%

Abstract:

Aim and structure of the thesis: In the first article, I focus on the context in which the Homo Economicus was constructed - i.e., the conception of economic actors as fully rational, informed, egocentric, and profit-maximizing. I argue that the Homo Economicus theory was developed in a specific societal context with specific (partly tacit) values and norms. These norms have implicitly influenced the behavior of economic actors and have framed the interpretation of the Homo Economicus. Different factors, however, have weakened this implicit influence of the broader societal values and norms on economic actors. The result is an unbridled interpretation and application of the values and norms of the Homo Economicus in the business environment, and perhaps also in the broader society.

In the second article, I show that the morality of many economic actors relies on isomorphism, i.e., the attempt to fit into the group by adopting the moral norms surrounding them. In consequence, if the norms prevailing in a specific group or context (such as a specific region or a specific industry) change, it can be expected that actors with an 'isomorphism morality' will also adapt their ethical thinking and their behavior - for the 'better' or for the 'worse'. The article further describes the process through which corporations could emancipate themselves from the ethical norms prevailing in the broader society and therefore develop an institution with specific norms and values. These norms mainly rely on mainstream business theories praising the economic actor's self-interest and neglecting moral reasoning. Moreover, because of isomorphism morality, many economic actors have changed their perception of ethics and have abandoned the values prevailing in the broader society in order to adopt those of economic theory. Finally, isomorphism morality also implies that these economic actors will change their morality again if the institutional context changes.

The third article highlights the role and responsibility of business scholars in promoting a systematic reflection and self-critique of the business system, and it develops alternative models to fill the moral void of the business institution and address its inherent legitimacy crisis. Indeed, the current business institution relies on assumptions such as scientific neutrality and specialization, which seem at least partly challenged by two factors. First, self-fulfilling prophecy provides scholars with an important (even if sometimes undesired) normative influence over practical life. Second, the increasing complexity of today's (socio-political) world and the interactions between the different elements constituting our society question the strong specialization of science. For instance, economic theories are not unrelated to psychology or sociology, and economic actors influence socio-political structures and processes, e.g., through lobbying (Dobbs, 2006; Rondinelli, 2002), or through marketing, which changes not only the way we consume but, more generally, tries to instill a specific lifestyle (Cova, 2004; M. K. Hogg & Michell, 1996; McCracken, 1988; Muniz & O'Guinn, 2001). In consequence, business scholars are key actors in shaping both tomorrow's economic world and its broader context. A greater awareness of this influence might be a first step toward an increased feeling of civic responsibility and accountability for the models and theories developed or taught in business schools.