Abstract:
The atmosphere is a global influence on the movement of heat and humidity between the continents, and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for the understanding of different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for the reconstruction of past aeolian dynamics.

In this thesis, two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, which is dated to 12,900 years before present. Taken together, the cores cover the last 60,000 years: SM3 the Holocene and DE3 the marine isotope stages MIS-3 and MIS-2, respectively. The frequencies of glacial dust storm events and their paleo-wind directions are detected by high-resolution grain-size and provenance analysis of the lake sediments. For this purpose, two different methods are applied: geochemical measurements of the sediment using µXRF scanning and the particle analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections).

It is shown that single dust layers in the lake sediment are characterized by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel-North-South zone is the most likely source of the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar is located on the western side of the Eifel-North-South zone; thus, carbonate-rich aeolian sediment is most likely transported towards the Dehner dry maar by easterly winds. A methodology, the RADIUS-carbonate module, is developed which limits the detection to the aeolian-transported carbonate particles in the sediment.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east-wind frequency are increased in comparison to MIS-2. These results lead to the suggestion that atmospheric circulation was affected by more turbulent conditions during MIS-3, in comparison to the more stable atmospheric circulation during the full glacial conditions of MIS-2.

The results of the investigations of the dust records are finally evaluated in relation to a study of atmospheric general circulation models for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east-wind conditions and of east-wind storm events, which are suggested to lead to an enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also illustrated in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea surface temperature patterns. Furthermore, the analysis of long-persisting east-wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all experiments are characterized by an increase in the relative importance of the LEWIC during spring and summer.
The different glacial experiments consistently show a shift of a long-lasting high from over the Baltic Sea towards the NW, directly above the Scandinavian Ice Sheet, together with a simultaneously enhanced westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period. It has been possible to reconstruct important elements of the glacial paleoclimate in Central Europe. While the proxy data from the sediment cores yield only a binary signal of wind-direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models proves successful: it shows a possible distribution of high and low pressure areas and thus the direction and strength of the wind fields which have the capacity to transport dust. In conclusion, the combination of numerical models, used to enhance understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleoclimatic reconstruction.
Abstract:
In many application domains data can be naturally represented as graphs. When the application of analytical solutions for a given problem is unfeasible, machine learning techniques can be a viable way to solve the problem. Classical machine learning techniques are defined for data represented in a vectorial form. Recently some of them have been extended to deal directly with structured data. Among these techniques, kernel methods have shown promising results both from the computational complexity and the predictive performance point of view. Kernel methods make it possible to avoid an explicit mapping into a vectorial form by relying on kernel functions, which, informally, are functions that compute a similarity measure between two entities. However, the definition of good kernels for graphs is a challenging problem because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some sources. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results, from both the computational and the classification point of view, on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible. Moreover, we defined a principled way to manage memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information. However, existing methods that consider the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods to this domain, obtaining state-of-the-art results.
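To make the kernel idea concrete, here is a minimal sketch (not one of the DAG-based kernels proposed in the thesis) of a kernel function for graphs: a vertex-label histogram kernel that measures similarity without the caller ever materializing an explicit vectorial embedding. The graph representation and names are illustrative assumptions only.

```python
from collections import Counter

def label_histogram_kernel(graph_a, graph_b):
    """Toy graph kernel: similarity as the dot product of vertex-label counts.

    Each graph is assumed to be a dict mapping vertex id -> label.
    This is NOT the DAG-based kernel of the thesis, just a minimal example
    of a kernel function computing a similarity measure between two graphs.
    """
    hist_a = Counter(graph_a.values())
    hist_b = Counter(graph_b.values())
    # Dot product over the implicit feature space of vertex labels.
    return sum(hist_a[label] * hist_b[label] for label in hist_a)

if __name__ == "__main__":
    g1 = {0: "C", 1: "O", 2: "C"}                # e.g. atoms of a small molecule
    g2 = {0: "C", 1: "N", 2: "C", 3: "C"}
    print(label_histogram_kernel(g1, g2))        # 6 = 2 * 3 shared "C" labels
```

Kernels of this simple form are cheap to compute but have limited expressiveness, which is precisely the tradeoff the thesis addresses with the DAG-based family.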
Abstract:
This thesis is on the flavor problem of Randall-Sundrum models and their strongly coupled dual theories. These models are particularly well-motivated extensions of the Standard Model, because they simultaneously address the gauge hierarchy problem and the hierarchies in the quark masses and mixings. In order to put this into context, special attention is given to the concepts underlying the theories which can explain the hierarchy problem and the flavor structure of the Standard Model (SM). The AdS/CFT duality is introduced and its implications for the Randall-Sundrum model with fermions in the bulk and general bulk gauge groups are investigated. It will be shown that the different terms in the general 5D propagator of a bulk gauge field can be related to the corresponding diagrams of the strongly coupled dual, which allows for a deeper understanding of the origin of flavor changing neutral currents generated by the exchange of the Kaluza-Klein excitations of these bulk fields. In the numerical analysis, different observables which are sensitive to corrections from the tree-level exchange of these resonances will be presented on the basis of updated experimental data from the Tevatron and LHC experiments. This includes electroweak precision observables, namely corrections to the S and T parameters followed by corrections to the Zbb vertex; flavor changing observables with flavor changes at one vertex, viz. BR(Bd -> mu+mu-) and BR(Bs -> mu+mu-), and at two vertices, viz. S_psiphi and |eps_K|; as well as bounds from direct detection experiments. The analysis will show that all of these bounds can be brought into agreement with a new physics scale Lambda_NP in the TeV range, except for the CP-violating quantity |eps_K|, which requires Lambda_NP of order 10 TeV in the absence of fine-tuning. The numerous modifications of the Randall-Sundrum model in the literature which try to attenuate this bound are reviewed and categorized.

Subsequently, a novel solution to this flavor problem, based on an extended color gauge group in the bulk and its thorough implementation in the RS model, will be presented, together with an analysis of the observables mentioned above in the extended model. This solution is especially well motivated from the point of view of the strongly coupled dual theory, and the implications for strongly coupled models of new physics which do not possess a holographic dual are examined. Finally, the top quark plays a special role in models with a geometric explanation of flavor hierarchies, and the predictions of the Randall-Sundrum model, with and without the proposed extension, for the forward-backward asymmetry A_FB^t in top-pair production are computed.
Abstract:
When designing metaheuristic optimization methods, there is a trade-off between application range and effectiveness. For large real-world instances of combinatorial optimization problems, out-of-the-box metaheuristics often fail, and optimization methods need to be adapted to the problem at hand. Knowledge about the structure of high-quality solutions can be exploited by introducing a so-called bias into one of the components of the metaheuristic used. These problem-specific adaptations make it possible to increase search performance. This thesis analyzes the characteristics of high-quality solutions for three constrained spanning tree problems: the optimal communication spanning tree problem, the quadratic minimum spanning tree problem and the bounded diameter minimum spanning tree problem. Several relevant tree properties that should be examined when analyzing a constrained spanning tree problem are identified. Based on the insights gained into the structure of high-quality solutions, efficient and robust solution approaches are designed for each of the three problems. Experimental studies analyze the performance of the developed approaches compared to the current state of the art.
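As a hedged illustration of what introducing a bias into a metaheuristic component can look like (not one of the specific approaches designed in the thesis), the sketch below grows a random spanning tree while sampling frontier edges with a soft preference for low weights; the graph encoding, the form of the bias and the parameter values are assumptions made for this example.

```python
import random

def biased_random_spanning_tree(n, edges, bias=2.0, rng=None):
    """Grow a spanning tree, sampling frontier edges with a weight-dependent bias.

    edges: list of (u, v, w) on vertices 0..n-1 (graph assumed connected).
    bias:  exponent controlling how strongly short edges are preferred;
           bias = 0 gives an unbiased random construction.
    Illustrative sketch only, not the thesis's algorithms.
    """
    rng = rng or random.Random(0)
    adj = {v: [] for v in range(n)}
    for u, v, w in edges:
        adj[u].append((u, v, w))
        adj[v].append((v, u, w))

    start = rng.randrange(n)
    in_tree = {start}
    frontier = list(adj[start])                  # edges leaving the partial tree
    tree = []
    while len(in_tree) < n:
        # Keep only edges that still leave the tree.
        frontier = [(u, v, w) for (u, v, w) in frontier if v not in in_tree]
        # Sample with probability proportional to w**(-bias): a soft preference
        # for short edges rather than a purely greedy choice.
        probs = [w ** (-bias) for (_, _, w) in frontier]
        u, v, w = rng.choices(frontier, weights=probs, k=1)[0]
        tree.append((u, v, w))
        in_tree.add(v)
        frontier.extend(adj[v])
    return tree

if __name__ == "__main__":
    edges = [(0, 1, 1.0), (1, 2, 4.0), (0, 2, 2.0), (2, 3, 1.5)]
    print(biased_random_spanning_tree(4, edges))
```

Repeatedly sampling such biased trees and keeping the best one is one simple way a construction component of a metaheuristic can exploit knowledge that high-quality solutions favour certain edge properties.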
Abstract:
This work is focused on the study of saltwater intrusion in coastal aquifers, and in particular on the construction of conceptual schemes to evaluate the risk associated with it. Saltwater intrusion depends on different natural and anthropic factors, both of which exhibit strongly aleatory behaviour and should be considered for an optimal management of the territory and of water resources. Given the uncertainty in the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological problem parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique, based on Polynomial Chaos Expansion, able to provide a good description of the model without a great computational burden. When the assumptions of classical analytical models are not satisfied, as happens in several applications to real case studies, including the area analyzed in the present work, one can adopt data-driven techniques based on the analysis of the data characterizing the system under study. It follows that a model can be defined on the basis of connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
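The probabilistic treatment described above can be sketched, under invented assumptions, as follows: uncertain hydrogeological parameters are modeled as independent random variables, and crude first-order Sobol sensitivity indices are estimated for a toy stand-in for the interface position. This is plain Monte Carlo with binned conditional means, not the sharp-interface model or the Polynomial Chaos Expansion surrogate used in the work; all names, bounds and the response function are illustrative.

```python
import numpy as np

def first_order_sobol(sample_fn, bounds, n=20000, bins=30, seed=0):
    """Crude first-order Sobol indices via binned conditional means.

    bounds: list of (low, high); inputs are sampled uniformly and independently.
    S_i is estimated as Var_bins(E[Y | X_i in bin]) / Var(Y), a simple
    (biased but illustrative) estimator of each parameter's influence.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([rng.uniform(lo, hi, n) for lo, hi in bounds])
    Y = sample_fn(X)
    var_y = Y.var()
    indices = []
    for i in range(X.shape[1]):
        # Bin the i-th input and average the response within each bin.
        edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
        which = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, bins - 1)
        cond_means = np.array([Y[which == b].mean() for b in range(bins)])
        indices.append(cond_means.var() / var_y)
    return indices

def toe_position(X):
    """Toy stand-in for the interface (toe) position: grows with conductivity K
    and thickness b, shrinks with freshwater outflow q (NOT the thesis's model)."""
    K, q, b = X[:, 0], X[:, 1], X[:, 2]
    return K * b ** 2 / (2.0 * q)

if __name__ == "__main__":
    bounds = [(5.0, 50.0),      # hydraulic conductivity K [m/day]  (assumed range)
              (0.1, 1.0),       # freshwater outflow q [m^2/day]    (assumed range)
              (10.0, 30.0)]     # aquifer thickness b [m]           (assumed range)
    print(first_order_sobol(toe_position, bounds))
```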
Abstract:
Classic group recommender systems focus on providing suggestions for a fixed group of people. Our work tries to give an inside look at designing a new recommender system that is capable of making suggestions for a sequence of activities, dividing people into subgroups, in order to boost overall group satisfaction. However, this idea increases the problem complexity in more dimensions and poses a great challenge to the algorithm's performance. To understand its effectiveness, given the enhanced complexity of solving the problem precisely, we implemented an experimental system using data collected from a variety of web services concerning the city of Paris. The system recommends activities to a group of users using two different approaches: Local Search and Constraint Programming. The general results show that the number of subgroups can significantly influence the Constraint Programming approach's computational time and efficacy. Generally, Local Search can find results much more quickly than Constraint Programming. Over a lengthy period of time, Local Search performs better than Constraint Programming, with similar final results.
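As a hedged sketch of the Local Search side of the comparison (not the system actually built on the Paris web-service data), the code below assigns users to activity subgroups under capacity limits and accepts single-user moves that increase total satisfaction; the satisfaction matrix, capacities and move rule are toy assumptions.

```python
import random

def local_search_assignment(satisfaction, capacity, iters=20000, seed=0):
    """Assign users to activity subgroups by hill-climbing under capacities.

    satisfaction[u][g]: toy utility of user u for activity g.
    capacity[g]:        maximum subgroup size for activity g (assumed to cover all users).
    Single-user reassignments into non-full groups are accepted when they
    increase total satisfaction. Illustrative sketch of Local Search only.
    """
    rng = random.Random(seed)
    n_users, n_groups = len(satisfaction), len(capacity)
    # Feasible initial assignment: always place the user in a least-filled open group.
    assign, size = [None] * n_users, [0] * n_groups
    for u in range(n_users):
        g = min((g for g in range(n_groups) if size[g] < capacity[g]),
                key=lambda g: size[g])
        assign[u], size[g] = g, size[g] + 1
    total = sum(satisfaction[u][assign[u]] for u in range(n_users))

    for _ in range(iters):
        u, g = rng.randrange(n_users), rng.randrange(n_groups)
        if g == assign[u] or size[g] >= capacity[g]:
            continue
        delta = satisfaction[u][g] - satisfaction[u][assign[u]]
        if delta > 0:                                   # improving move
            size[assign[u]] -= 1
            size[g] += 1
            assign[u] = g
            total += delta
    return assign, total

if __name__ == "__main__":
    sat = [[3, 1, 0], [2, 2, 1], [0, 3, 1], [1, 0, 3]]   # 4 users, 3 activities
    print(local_search_assignment(sat, capacity=[2, 2, 2]))
```

A Constraint Programming formulation of the same assignment would instead state the capacity and subgroup constraints declaratively and search exhaustively, which is where the computational-time differences reported above come from.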
Abstract:
Over the past twenty years, new technologies have required an increasing use of mathematical models in order to better understand structural behavior: the finite element method is the one most widely used. However, the reliability of this method when applied to different situations has to be verified each time. Since it is not possible to model reality completely, different hypotheses must be made: these are the main problems of FE modeling. The following work deals with this problem and tries to figure out a way to identify some of the main unknown parameters of a structure. This research focuses on a particular path of study and development, but the same concepts can be applied to other objects of research. The main purpose of this work is the identification of the unknown boundary conditions of a bridge pier using data acquired experimentally in field tests and an FEM modal updating process. This work does not claim to be new or innovative: a lot of work has been done on this problem during the past years and many solutions have been shown and published. This thesis simply reworks some of the main aspects of the structural optimization process, using a real structure as the fitting model.
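A minimal sketch of the modal updating idea, under invented assumptions: an unknown boundary stiffness in a toy two-degree-of-freedom pier model is tuned so that the model's natural frequencies best match "measured" ones. The model, the measured values and the use of SciPy are illustrative only, not the thesis's bridge pier model.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize_scalar

def natural_frequencies(k_boundary):
    """Natural frequencies [Hz] of a toy 2-DOF pier model.

    k_boundary is the unknown spring standing in for the boundary condition to
    be identified; masses and stiffnesses are illustrative values only.
    """
    m = np.diag([2.0e4, 1.5e4])                       # lumped masses [kg]
    k = np.array([[4.0e7 + k_boundary, -4.0e7],
                  [-4.0e7,              4.0e7]])      # stiffness matrix [N/m]
    eigvals = eigh(k, m, eigvals_only=True)           # generalized eigenproblem
    return np.sqrt(eigvals) / (2.0 * np.pi)

def updating_error(k_boundary, f_measured):
    """Sum of squared relative errors between model and measured frequencies."""
    f_model = natural_frequencies(k_boundary)
    return np.sum(((f_model - f_measured) / f_measured) ** 2)

if __name__ == "__main__":
    f_measured = np.array([3.1, 9.4])                 # e.g. from field tests [Hz] (invented)
    res = minimize_scalar(updating_error, bounds=(1e6, 1e9),
                          args=(f_measured,), method="bounded")
    print("identified boundary stiffness:", res.x, "N/m")
```

In a real updating process the model would have many more degrees of freedom and several uncertain parameters, but the loop is the same: minimize the misfit between measured and computed modal quantities.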
Abstract:
An interdisciplinary European group of clinical experts in the field of movement disorders and experienced Botulinum toxin users has updated the consensus for the use of Botulinum toxin in the treatment of children with cerebral palsy (CP). A problem-orientated approach was used focussing on both published and practice-based evidence. In part I of the consensus the authors have tabulated the supporting evidence to produce a concise but comprehensive information base, pooling data and experience from 36 institutions in 9 European countries which involves more than 10,000 patients and over 45,000 treatment sessions during a period of more than 280 treatment years. In part II of the consensus the Gross Motor Function Measure (GMFM) and Gross Motor Function Classification System (GMFCS) based Motor Development Curves have been expanded to provide a graphical framework on how to treat the motor disorders in children with CP. This graph is named "CP(Graph) Treatment Modalities - Gross Motor Function" and is intended to facilitate communication between parents, therapists and medical doctors concerning (1) achievable motor function, (2) realistic goal-setting and (3) treatment perspectives for children with CP. The updated European consensus 2009 summarises the current understanding regarding an integrated, multidisciplinary treatment approach using Botulinum toxin for the treatment of children with CP.
Abstract:
This paper presents an automated solution for the precise detection of fiducial screws from three-dimensional (3D) Computerized Tomography (CT)/Digital Volume Tomography (DVT) data for image-guided ENT surgery. Unlike previously published solutions, we regard the detection of the fiducial screws from the CT/DVT volume data as a pose estimation problem. We thus developed a model-based solution. Starting from a user-supplied initialization, our solution detects the fiducial screws by iteratively matching a computer-aided design (CAD) model of the fiducial screw to features extracted from the CT/DVT data. We validated our solution on one conventional CT dataset and on five DVT volume datasets, resulting in the detection of a total of 24 fiducial screws. Our experimental results indicate that the proposed solution achieves much higher reproducibility and precision than manual detection. Further comparison shows that the proposed solution produces better results on the DVT datasets than on the conventional CT dataset.
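The iterative model-to-feature matching can be illustrated, under simplifying assumptions, by an ICP-style loop that rigidly registers a CAD point model to extracted feature points by alternating nearest-neighbour correspondence with a least-squares (Kabsch) fit. This is a generic sketch on synthetic points, not the paper's actual detection pipeline.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    h = (src - c_src).T @ (dst - c_dst)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))            # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, c_dst - r @ c_src

def icp(model_pts, feature_pts, iters=30):
    """Iteratively match a CAD point model to volume-derived feature points.

    Alternates nearest-neighbour correspondences with a rigid least-squares fit,
    starting (here) from the identity pose in place of a user-supplied one.
    Illustrative sketch of model-based pose estimation only.
    """
    pts = model_pts.copy()
    for _ in range(iters):
        # Nearest feature point for each (currently transformed) model point.
        d = np.linalg.norm(pts[:, None, :] - feature_pts[None, :, :], axis=2)
        matched = feature_pts[d.argmin(axis=1)]
        r, t = rigid_fit(pts, matched)
        pts = pts @ r.T + t
    return pts

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    model = rng.normal(size=(60, 3))                  # stand-in for CAD screw points
    theta = 0.1                                       # small synthetic pose offset [rad]
    true_r = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    features = model @ true_r.T + np.array([0.3, -0.2, 0.1])
    aligned = icp(model, features)
    print("mean residual:", np.linalg.norm(aligned - features, axis=1).mean())
```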
Abstract:
Java Enterprise Applications (JEAs) are complex software systems written using multiple technologies. Moreover, they are usually distributed systems and use a database to deal with persistence. A particular problem that appears in the design of these systems is the lack of a rich business model. In this paper we propose a technique to support the recovery of such rich business objects starting from anemic Data Transfer Objects (DTOs). By exposing the code duplication among the application's elements that use the DTOs, we suggest which business logic can be moved into the DTOs from the other classes.
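A hedged, much-simplified sketch of the underlying idea: compare method bodies (here as whitespace-separated token sequences) against snippets that manipulate a DTO's fields, and report near-duplicates as candidates for business logic that could move into the DTO. Function names, the similarity measure and the threshold are assumptions for illustration; the paper's analysis operates on the Java applications themselves.

```python
from difflib import SequenceMatcher

def duplication_ratio(body_a, body_b):
    """Token-level similarity of two method bodies, via difflib."""
    return SequenceMatcher(None, body_a.split(), body_b.split()).ratio()

def suggest_moves(dto_usages, service_methods, threshold=0.7):
    """Suggest business logic that could be moved into a DTO.

    dto_usages:      {dto_name: [code snippets that read/write the DTO's fields]}
    service_methods: {method_name: body}  -- methods of the surrounding classes
    Methods whose bodies are near-duplicates of DTO-centric snippets are
    reported as move candidates. Illustrative sketch only.
    """
    suggestions = []
    for dto, snippets in dto_usages.items():
        for method, body in service_methods.items():
            if any(duplication_ratio(s, body) >= threshold for s in snippets):
                suggestions.append((method, dto))
    return suggestions

if __name__ == "__main__":
    usages = {"CustomerDTO":
              ["total = 0 for item in customer.items: total += item.price * item.qty"]}
    methods = {"OrderService.computeTotal":
               "total = 0 for item in customer.items: total += item.price * item.qty return total"}
    print(suggest_moves(usages, methods))   # [('OrderService.computeTotal', 'CustomerDTO')]
```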
Abstract:
SETTING: Correctional settings and remand prisons. OBJECTIVE: To critically discuss calculations for epidemiological indicators of the tuberculosis (TB) burden in prisons and to provide recommendations to improve study comparability. METHODS: A hypothetical data set illustrates issues in determining incidence and prevalence. The appropriate calculation of the incidence rate is presented and problems arising from cross-sectional surveys are clarified. RESULTS: Cases recognized during the first 3 months should be classified as prevalent at entry and excluded from any incidence rate calculation. The numerator for the incidence rate includes persons detected as having developed TB during a specified period of time subsequent to the initial 3 months. The denominator is person-time at risk from 3 months onward to the end point (TB or end of the observation period). Preferably, entry time, exit time and event time are known for each inmate to determine person-time at risk. Failing that, an approximation consists of the sum of monthly head counts, excluding prevalent cases and those persons no longer at risk from both the numerator and the denominator. CONCLUSIONS: The varying durations of inmate incarceration in prisons pose challenges for quantifying the magnitude of the TB problem in the inmate population. Recommendations are made to measure incidence and prevalence.
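A small worked sketch of the calculation rule described above, on invented data: cases found within the first 3 months after entry are treated as prevalent at entry and excluded, the numerator counts cases arising afterwards, and the denominator is person-time at risk accrued from month 3 onward. Field names and the cohort are illustrative assumptions.

```python
def incidence_rate(inmates, window_months=3):
    """Incidence per 100 person-years following the rule in the abstract.

    inmates: list of dicts with
        'entry', 'exit'  -- months since study start
        'tb_month'       -- month of TB diagnosis, or None
    Cases within `window_months` of entry are prevalent at entry and excluded;
    person-time at risk starts at entry + window_months. Toy data only.
    """
    cases = 0
    person_months = 0.0
    for p in inmates:
        risk_start = p["entry"] + window_months
        tb = p["tb_month"]
        if tb is not None and tb < risk_start:
            continue                        # prevalent at entry: excluded entirely
        end = min(tb, p["exit"]) if tb is not None else p["exit"]
        if end <= risk_start:
            continue                        # never observed while at risk
        person_months += end - risk_start
        if tb is not None and tb <= p["exit"]:
            cases += 1
    return 100.0 * cases / (person_months / 12.0)

if __name__ == "__main__":
    cohort = [
        {"entry": 0, "exit": 24, "tb_month": 2},      # prevalent at entry -> excluded
        {"entry": 0, "exit": 18, "tb_month": 10},     # incident case, 7 months at risk
        {"entry": 6, "exit": 30, "tb_month": None},   # censored, 21 months at risk
    ]
    # 1 case over 28 person-months (2.33 person-years) ~= 42.9 per 100 person-years.
    print(round(incidence_rate(cohort), 1), "cases per 100 person-years")
```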
Abstract:
This project intertwines philosophical and historico-literary themes, taking as its starting point the concept of tragic consciousness inherent in the epoch of classicism. The research work makes use of ontological categories in order to describe the underlying principles of the image of the world which was created in philosophical and scientific theories of the 17th century as well as in contemporary drama. Using these categories brought Mr. Vilk to the conclusion that the classical picture of the world implied a certain dualism; not the Manichaean division between light and darkness but the discrimination between nature and absolute being, i.e. God. Mr. Vilk begins with an examination of the philosophical essence of French classical theatre of the XVII and XVIII centuries. The history of French classical tragedy can be divided into three periods: from the mid 17th to early 19th centuries when it triumphed all over France and exerted a powerful influence over almost all European countries; followed by the period of its rejection by the Romantics, who declared classicism to be "artificial and rational"; and finally our own century which has taken a more moderate line. Nevertheless, French classical tragedy has never fully recovered its status. Instead, it is ancient tragedy and the works of Shakespeare that are regarded to be the most adequate embodiment of the tragic. Consequently they still provoke a great number of new interpretations ranging from specialised literary criticism to more philosophical rumination. An important feature of classical tragedy is a system of rules and unities which reveals a hidden ontological structure of the world. The ontological picture of the dramatic world can be described in categories worked out by medieval philosophy - being, essence and existence. The first category is to be understood as a tendency toward permanency and stability (within eternity) connected with this or that fragment of dramatic reality. The second implies a certain set of permanent elements that make up the reality. And the third - existence - should be understood as "an act of being", as a realisation of permanently renewed processes of life. All of these categories can be found in every artistic reality but the accents put on one or another and their interrelations create different ontological perspectives. Mr. Vilk plots the movement of thought, expressed in both philosophical and scientific discourses, away from Aristotle's essential forms, and towards a prioritising of existence, and shows how new forms of literature and drama structured the world according to these evolving requirements. At the same time the world created in classical tragedy fully preserves another ontological paradigm - being - as a fundamental permanence. As far as the tragic hero's motivations are concerned this paradigm is revealed in the dedication of his whole self to some cause, and his oath of fidelity, attitudes which shape his behaviour. It may be the idea of the State, or personal honour, or something borrowed from the emotional sphere, passionate love. Mr. Vilk views the conflicting ambivalence of existence and being, duty as responsibility and duty as fidelity, as underlying the main conflict of classical tragedy of the 17th century. Having plotted the movement of the being/existence duality through its manifestations in 17th century tragedy, Mr. Vilk moves to the 18th century, when tragedy took a philosophical turn. 
A dualistic view of the world became supplanted by the Enlightenment idea of a natural law, rooted in nature. The main point of tragedy now was to reveal that such conflicts as might take place had an anti-rational nature, that they arose as the result of a kind of superstition caused by social reasons. These themes Mr. Vilk now pursues through Russian dramatists of the 18th and early 19th centuries. He begins with Sumarakov, whose philosophical thought has a religious bias. According to Sumarakov, the dualism of the divineness and naturalness of man is on the one hand an eternal paradox, and on the other, a moral challenge for humans to try to unite the two opposites. His early tragedies are not concerned with social evils or the triumph of natural feelings and human reason, but rather the tragic disharmony in the nature of man and the world. Mr. Vilk turns next to the work of Kniazhnin. He is particularly keen to rescue his reputation from the judgements of critics who accuse him of being imitative, and in order to do so, analyses in detail the tragedy "Dido", in which Kniazhnin makes an attempt to revive the image of great heroes and city-founders. Aeneas represents the idea of the "being" of Troy; his destiny is the re-establishment of the city (the future Rome). The moral aspect behind this idea is faithfulness: he devotes himself to the Gods. Dido is also the creator of a city, endowed with "natural powers" and abilities, but her creation is lacking internal stability grounded in "being". The unity of the two motives is only achieved through Dido's sacrifice of herself and her city to Aeneas. Mr. Vilk's next subject is Kheraskov, whose peculiarity lies in the influence of free-mason mysticism on his work. This section deals with one of the most important philosophical assumptions contained in contemporary free-mason literature of the time - the idea of the trinitarian hierarchy inherent in man and the world: body - soul - spirit, and nature - law - grace. Finally, Mr. Vilk assesses the work of Ozerov, the last major Russian tragedian. The tragedies which earned him fame, "Oedipus in Athens", "Fingal" and "Dmitri Donskoi", present a compromise between the Enlightenment's emphasis on harmony and ontological tragic conflict. But it is in "Polixene" that a real meeting of the Russian tradition with the age-old history of the genre takes place. The male and female characters of "Polixene" distinctly express the elements of "being" and "existence". Each of the participants of the conflict possesses some dominant characteristic personifying a certain indispensable part of the moral world, a certain "virtue". But their independent efforts are unable to overcome the ontological gap separating them. The end of the tragedy - Polixene's sacrificial self-immolation - paradoxically combines the glorification of each party involved in the conflict, and their condemnation. The final part of Mr. Vilk's research deals with the influence of "Polixene" upon subsequent dramatic art. In this respect Katenin's "Andromacha", inspired by "Polixene", is important to mention. In "Andromacha" a decisive divergence from the principles of the philosophical tragedy of Russian classicism and the ontology of classicism occurs: a new character appears as an independent personality, directed by his private interest. It was Katenin who was to become the intermediary between Pushkin and classical tragedy.
Abstract:
The occurrence of degenerative spinal disease subsequent to dystonic movement disorders has been neglected and has received more attention only recently. Spinal surgery is challenging with regard to continuous mechanical stress when treatment of the underlying movement disorder is insufficient. To characterize better the particular features of degenerative spinal disease in patients with dystonia and to analyze operative strategies, we reviewed the available published data. Epidemiologic studies reveal that degenerative spinal disorders in patients with dystonia and choreoathetosis occur much earlier than in the physiological aging process. Dystonic movement disorders more often affect the spine at higher cervical levels (C(2-5)), in contrast to spinal degeneration with age which manifests more frequently at the middle and lower cervical spine (C(5-7)). Degenerative changes of the cervical spine are more likely to occur on the side where the chin is rotated or tilted to. Various operative approaches for treatment of spinal pathologies have been advocated in patients with dystonic movement disorders. The available data do not allow making firm statements regarding the superiority of one approach over the other. Posterior approaches were first used for decompression, but additional anterior fusion became necessary in many instances. Anterior approaches with or without instrumented fusion yielded more favorable results, but drawbacks are pseudarthrosis and adjacent-level disease. Parallel to the development of posterior fusion techniques, circumferential surgery was suggested to provide a maximum degree of cord decompression and a higher fusion rate. Perioperative local injections of botulinum toxin were used initially to enhance patient comfort with halo immobilization, but they are also applied in patients without external fixation nowadays. Treatment algorithms directed at the underlying movement disorder itself, taking advantage of new techniques of functional neurosurgery, combined with spinal surgery have recently been introduced and show promising results.
Abstract:
With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data-analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data-analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity of the corresponding mass per charge value, x, in that specimen. Given high coefficients of variation and other characteristics of protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass per charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in SELDI output. After this pre-analysis processing of the data, we combine the binary predictors to generate classification rules for cancer, benign hyperplasia, and normal states of prostate. Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent of the training dataset used to construct the summary classifiers. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
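The two data-analytic steps described above can be sketched on synthetic data as follows: intensities are reduced to binary local-peak indicators along the x-axis, and the binary predictors are then combined by a boosting classifier and evaluated on a held-out split. scikit-learn's AdaBoost is used here as a stand-in for the paper's boosting algorithm; the window size, the data and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def binarize_peaks(spectrum, half_window=5):
    """1 where the intensity is a local maximum within +/- half_window points.

    A crude stand-in for the paper's peak-indicator construction: it keeps only
    the y-direction peak structure and discards the raw intensities.
    """
    n = len(spectrum)
    flags = np.zeros(n, dtype=int)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        flags[i] = int(spectrum[i] >= spectrum[lo:hi].max())
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_spec, n_points = 200, 300
    labels = rng.integers(0, 2, n_spec)               # 0 = normal, 1 = cancer (toy labels)
    spectra = rng.gamma(2.0, 1.0, size=(n_spec, n_points))
    spectra[labels == 1, 120] += 8.0                  # synthetic class-specific peak

    X = np.array([binarize_peaks(s) for s in spectra])          # binary predictors
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))
```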
Abstract:
Estimation for bivariate right-censored data is a problem that has been studied extensively over the past 15 years. In this paper we propose a new class of estimators for the bivariate survival function based on locally efficient estimation. We introduce the locally efficient estimator for bivariate right-censored data, present an asymptotic theorem and the results of simulation studies, and perform a brief data analysis illustrating the use of the locally efficient estimator.