910 results for Multiple Baseline Design
Abstract:
OBJECTIVE: To determine risk of Down syndrome (DS) in multiple relative to singleton pregnancies, and compare prenatal diagnosis rates and pregnancy outcome.
DESIGN: Population-based prevalence study based on EUROCAT congenital anomaly registries.
SETTING: Eight European countries.
POPULATION: 14.8 million births 1990-2009; 2.89% multiple births.
METHODS: DS cases included livebirths, fetal deaths from 20 weeks, and terminations of pregnancy for fetal anomaly (TOPFA). Zygosity was inferred from like/unlike sex for birth denominators, and from concordance for DS cases.
MAIN OUTCOME MEASURES: Relative risk (RR) of DS per fetus/baby from multiple versus singleton pregnancies and per pregnancy in monozygotic/dizygotic versus singleton pregnancies. Proportion of cases prenatally diagnosed, and pregnancy outcome.
STATISTICAL ANALYSIS: Poisson and logistic regression stratified for maternal age, country and time.
RESULTS: Overall, the adjusted (adj) RR of DS for fetus/babies from multiple versus singleton pregnancies was 0.58 (95% CI 0.53-0.62), similar for all maternal ages except for mothers over 44, for whom it was considerably lower. In 8.7% of twin pairs affected by DS, both co-twins were diagnosed with the condition. The adjRR of DS for monozygotic versus singleton pregnancies was 0.34 (95% CI 0.25-0.44) and for dizygotic versus singleton pregnancies 1.34 (95% CI 1.23-1.46). DS fetuses from multiple births were less likely to be prenatally diagnosed than singletons (adjOR 0.62 [95% CI 0.50-0.78]) and following diagnosis less likely to be TOPFA (adjOR 0.40 [95% CI 0.27-0.59]).
CONCLUSIONS: The risk of DS per fetus/baby is lower in multiple than singleton pregnancies. These estimates can be used for genetic counselling and prenatal screening.
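As an aside on the statistical analysis: adjusted RRs of this kind come from Poisson regression with stratification, which can be sketched in Python with statsmodels (file and column names here are hypothetical; the log-births offset turns the rate ratio for `multiple` into the RR per fetus/baby):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical stratum-level counts: DS cases and births per stratum of
# plurality (multiple vs singleton), maternal age band, country and period.
df = pd.read_csv("ds_strata.csv")  # columns: cases, births, multiple, age_band, country, period

# Poisson regression for case counts with log(births) as offset; the
# exponentiated coefficient of `multiple` is the adjusted RR per fetus/baby.
model = smf.glm(
    "cases ~ multiple + C(age_band) + C(country) + C(period)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["births"]),
).fit()

rr = np.exp(model.params["multiple"])
lo, hi = np.exp(model.conf_int().loc["multiple"])
print(f"adjusted RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```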
Abstract:
We report the discovery, tracking, and detection circumstances for 85 trans-Neptunian objects (TNOs) from the first 42 deg2 of the Outer Solar System Origins Survey. This ongoing r-band solar system survey uses the 0.9 deg2 field of view MegaPrime camera on the 3.6 m Canada–France–Hawaii Telescope. Our orbital elements for these TNOs are precise to a fractional semimajor axis uncertainty <0.1%. We achieve this precision in just two oppositions, as compared to the normal three to five oppositions, via a dense observing cadence and innovative astrometric technique. These discoveries are free of ephemeris bias, a first for large trans-Neptunian surveys. We also provide the necessary information to enable models of TNO orbital distributions to be tested against our TNO sample. We confirm the existence of a cold "kernel" of objects within the main cold classical Kuiper Belt and infer the existence of an extension of the "stirred" cold classical Kuiper Belt to at least several au beyond the 2:1 mean motion resonance with Neptune. We find that the population model of Petit et al. remains a plausible representation of the Kuiper Belt. The full survey, to be completed in 2017, will provide an exquisitely characterized sample of important resonant TNO populations, ideal for testing models of giant planet migration during the early history of the solar system.
Abstract:
Collaboration in the public sector is imperative to achieve e-government objectives such as improved efficiency and effectiveness of public administration and improved quality of public services. Collaboration across organizational and institutional boundaries requires public organizations to share e-government systems and services through, for instance, interoperable information technology and processes. Demands on public organizations to become more open also require that they adopt new collaborative approaches for inviting and engaging citizens in governmental activities. E-government related collaboration in the public sector is challenging, however, and collaboration initiatives often fail. Public organizations need to learn how to collaborate, since the forms of e-government collaboration and their expected outcomes are mostly unknown. How public organizations can collaborate, and with which expected outcomes, is thus investigated in this thesis by studying multiple collaboration cases on the acquisition and implementation of a particular e-government investment (a digital archive). This thesis also investigates how e-government collaboration can be facilitated through artifacts. This is done through a case study, in which objects that cross boundaries between collaborating communities in the public sector are studied, and by designing a configurable process model integrating several processes for social services. Using design science, this thesis also investigates how an m-government solution that facilitates collaboration between citizens and public organizations can be designed. The thesis contributes to the literature by describing five different modes of interorganizational collaboration in the public sector and the expected benefits of each mode. It also contributes an instantiation of a configurable process model supporting three open social e-services, together with evidence of how it can facilitate collaboration. The thesis further describes how boundary objects facilitate collaboration between different communities in an open government design initiative. It contributes a designed mobile government solution, thereby providing a proof of concept and initial design implications for enabling collaboration with citizens through citizen sourcing (outsourcing a governmental activity to citizens through an open call). A literature review identifies research streams within e-government collaboration research, and the thesis contributions are related to these streams. The thesis gives directions for future research, suggesting that it should focus further on understanding e-government collaboration and how information and communication technology can facilitate collaboration in the public sector, investigate m-government solutions to form design theories, and examine how value can be co-created in e-government collaboration.
Abstract:
This thesis concerns the study and implementation of a multiple kernel learning (MKL) algorithm for the classification and regression of neuroimaging data and, in particular, of functional connectivity graphs. MKL algorithms employ a weighted sum of several kernels (i.e., similarity measures) and allow the features useful for discriminating the instances to be selected during the training of the classifier/regressor itself. The innovative aspect introduced in this thesis is the study of a new kernel between functional connectivity graphs, with the particular characteristic of preserving the information on the importance of each single region of interest (ROI), employing the lp norm as the weight-update method in order to obtain sparse solutions. The algorithm was validated using synthetic connectivity maps and was applied to a dataset of 32 patients with mild cognitive impairment and small vessel disease, 16 of whom underwent cognitive rehabilitation between a baseline and a follow-up functional magnetic resonance imaging examination. The connectivity maps were obtained with the CONN toolbox. The classifier was able to discriminate the two groups of patients in a nested leave-one-out configuration with an accuracy of 87.5%. This thesis work was carried out during a research period at the School of Computer Science and Electronic Engineering of the University of Essex (Colchester, UK).
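As a rough sketch of the lp-norm MKL scheme described above (an illustration of the general technique, not the thesis code): one precomputed kernel per ROI, an SVM trained on their weighted sum, and a closed-form lp-norm update of the weights that drives uninformative ROIs toward zero:

```python
import numpy as np
from sklearn.svm import SVC

def lp_mkl(kernels, y, p=1.2, n_iter=20, C=1.0):
    """lp-norm MKL sketch: alternate an SVM on the weighted kernel sum with
    a closed-form weight update; p close to 1 yields sparser ROI weights."""
    M, n = len(kernels), len(y)
    beta = np.full(M, M ** (-1.0 / p))            # uniform start, ||beta||_p = 1
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, kernels))
        svm = SVC(kernel="precomputed", C=C).fit(K, y)
        ay = np.zeros(n)
        ay[svm.support_] = svm.dual_coef_[0]      # y_i * alpha_i at support vectors
        # Per-kernel margin contribution: ||w_m||^2 = beta_m^2 * ay^T K_m ay
        norms = np.array([b**2 * ay @ Km @ ay for b, Km in zip(beta, kernels)])
        beta = norms ** (1.0 / (p + 1))
        beta /= np.linalg.norm(beta, ord=p)       # renormalize to ||beta||_p = 1
    return beta, svm
```

The returned `beta` plays the role of the per-ROI importance that the kernel described above aims to preserve.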
Abstract:
Variability management is one of the major challenges in software product line adoption, since variability needs to be efficiently managed at various levels of the software product line development process (e.g., requirement analysis, design, implementation). One of the main challenges within variability management is the handling and effective visualization of large-scale (industry-size) models, which in many projects can reach the order of thousands of variability points, along with the dependency relationships that exist among them. These have raised many concerns regarding the scalability of current variability management tools and techniques and their lack of industrial adoption. To address the scalability issues, this work employed a combination of quantitative and qualitative research methods to identify the reasons behind the limited scalability of existing variability management tools and techniques. In addition to producing a comprehensive catalogue of existing tools, the outcome from this stage helped identify the major limitations of existing tools. Based on the findings, a novel approach was created for managing variability that employed two main principles for supporting scalability. First, the separation-of-concerns principle was applied by creating multiple views of variability models to alleviate information overload. Second, hyperbolic trees were used to visualise models (compared to the Euclidean-space trees traditionally used). The result was an approach that can represent models encompassing hundreds of variability points and complex relationships. These concepts were demonstrated by implementing them in an existing variability management tool and using it to model a real-life product line with over a thousand variability points. Finally, in order to assess the work, an evaluation framework was designed based on established usability assessment best practices and standards. The framework was then used with several case studies to benchmark the performance of this work against other existing tools.
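To illustrate the second principle (a toy sketch of hyperbolic layout under our own simplifying assumptions, not the tool's code): placing nodes in the Poincaré disk compresses each additional level toward the rim, which is what lets views with thousands of variability points stay readable:

```python
import math

def poincare_layout(node, r=0.0, angle=0.0, wedge=2 * math.pi,
                    step=0.45, out=None):
    """Place a tree in the Poincare disk: children split the parent's angular
    wedge, and each level takes a fixed hyperbolic step outward, so radii
    approach 1 asymptotically and deep subtrees never leave the unit disk."""
    if out is None:
        out = {}
    out[node["name"]] = (r * math.cos(angle), r * math.sin(angle))
    children = node.get("children", [])
    if children:
        child_r = math.tanh(math.atanh(min(r, 0.999)) + step)
        slice_ = wedge / len(children)
        for i, child in enumerate(children):
            a = angle - wedge / 2 + slice_ * (i + 0.5)
            poincare_layout(child, child_r, a, slice_, step, out)
    return out

# Hypothetical fragment of a variability model:
model = {"name": "root", "children": [
    {"name": "feature_A", "children": [{"name": "A1"}, {"name": "A2"}]},
    {"name": "feature_B"},
]}
print(poincare_layout(model))
```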
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them to improve performance over individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches to rerank initially retrieved images for generic image retrieval. In the graph, vertices are images while edges are similarities between image pairs. Our first approach employs a mixture Markov model, based on a random walk over multiple graphs, to fuse the graphs. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires statistics of similarities from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information gain term over the graph, our submodular function favors a subset of database images that are similar to the query images and resemble each other. The function also exploits the rank relationships of images from multiple ranked lists obtained with different features. We then study a more narrowly defined application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identifications from multiple cameras are regarded as related tasks in order to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to embed the original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered. To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNNs), in view of the emergence of deep networks. We improve the Fast R-CNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). SDP improves detection accuracy by exploiting convolutional features appropriate to the scale of the input object proposals. CRC effectively utilizes convolutional features and eliminates most negative proposals in a cascaded manner, while maintaining a high recall for true objects. Together, the two strategies improve detection accuracy and reduce the computational cost.
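A minimal sketch of the first approach's core mechanic (our own simplified rendering of random-walk graph fusion, with hypothetical inputs): each feature yields a similarity graph; their row-stochastic transition matrices are mixed with feature weights, and a restart at the query reranks the database:

```python
import numpy as np

def fuse_and_rerank(sim_graphs, weights, query_idx, alpha=0.85, n_iter=100):
    """Mixture-Markov-style reranking sketch: mix per-feature transition
    matrices, then run a random walk with restart at the query; database
    images are reranked by the resulting visit probabilities."""
    P = np.zeros_like(sim_graphs[0], dtype=float)
    for w, S in zip(weights, sim_graphs):
        P += w * (S / S.sum(axis=1, keepdims=True))  # row-stochastic per feature
    n = P.shape[0]
    restart = np.zeros(n)
    restart[query_idx] = 1.0
    scores = np.full(n, 1.0 / n)
    for _ in range(n_iter):                          # power iteration
        scores = alpha * scores @ P + (1 - alpha) * restart
    return np.argsort(-scores)                       # best-first ordering
```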
Abstract:
Interest in renewable energy has increased considerably in recent years, due to concerns over the environmental impact of conventional energy sources and their price volatility. In particular, wind power has enjoyed dramatic global growth in installed capacity over the past few decades. Nowadays, the advancement of the wind turbine industry represents a challenge for several engineering areas, including materials science, computer science, aerodynamics, analytical design and analysis methods, testing and monitoring, and power electronics. In particular, the technological improvement of wind turbines is currently tied to the use of advanced design methodologies, allowing designers to develop new and more efficient design concepts. Integrating mathematical optimization techniques into the multidisciplinary design of wind turbines constitutes a promising way to enhance the profitability of these devices. In the literature, wind turbine design optimization is typically performed deterministically. Deterministic optimizations do not consider any degree of randomness affecting the inputs of the system under consideration, and therefore result in a unique set of outputs. However, given the stochastic nature of the wind and the uncertainties associated, for instance, with wind turbine operating conditions or geometric tolerances, deterministically optimized designs may be inefficient. Therefore, one way to further improve the design of modern wind turbines is to take the aforementioned sources of uncertainty into account in the optimization process, achieving robust configurations with minimal performance sensitivity to factors causing variability. The research work presented in this thesis deals with the development of a novel integrated multidisciplinary design framework for the robust aeroservoelastic design optimization of multi-megawatt horizontal axis wind turbine (HAWT) rotors, accounting for the stochastic variability of the input variables. The design system is based on a multidisciplinary analysis module integrating the several simulation tools needed to characterize the aeroservoelastic behavior of wind turbines and to determine their economic performance by means of the levelized cost of energy (LCOE). The reported design framework is portable and modular, in that any of its analysis modules can be replaced with counterparts of user-selected fidelity. The presented technology is applied to the design of a 5-MW HAWT rotor to be used at sites of wind power density class from 3 to 7, where the mean wind speed at 50 m above the ground ranges from 6.4 to 11.9 m/s. Assuming the mean wind speed to vary stochastically in this range, the rotor design is optimized by minimizing the mean and standard deviation of the LCOE. Airfoil shapes, spanwise distributions of blade chord and twist, internal structural layup and rotor speed are optimized concurrently, subject to an extensive set of structural and aeroelastic constraints. The effectiveness of the multidisciplinary and robust design framework is demonstrated by showing that the probabilistically designed turbine achieves more favorable probabilistic performance than both the initial baseline turbine and a turbine designed deterministically.
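The robust objective described above can be written down compactly; a toy sketch under stated assumptions (`lcoe_model` is a made-up surrogate standing in for the thesis's multidisciplinary analysis module, and the design vector is reduced to three variables):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
wind_speeds = rng.uniform(6.4, 11.9, size=64)    # stochastic mean wind speed at 50 m

def lcoe_model(design, v):
    """Hypothetical cheap stand-in for the aeroservoelastic LCOE analysis."""
    chord, twist, rpm = design
    power = chord * v**3 * (1.0 - 0.01 * (twist - 8.0) ** 2)  # crude power proxy
    cost = 1.0 + 0.2 * chord**2 + 0.001 * rpm                 # crude cost proxy
    return cost / max(power, 1e-6)

def robust_objective(design, k=1.0):
    # Propagate the wind-speed uncertainty and penalize performance spread:
    samples = np.array([lcoe_model(design, v) for v in wind_speeds])
    return samples.mean() + k * samples.std()    # mean + std of the LCOE

result = minimize(robust_objective, x0=[1.0, 8.0, 12.0], method="Nelder-Mead")
print("robust design:", result.x, "objective:", result.fun)
```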
Abstract:
A decision-maker, when faced with a limited and fixed budget to collect data in support of a multiple attribute selection decision, must decide how many samples to observe from each alternative and attribute. This allocation decision is of particular importance when the information gained leads to uncertain estimates of the attribute values, as with sample data collected from observations such as measurements, experimental evaluations, or simulation runs. For example, when the U.S. Department of Homeland Security must decide upon a radiation detection system to acquire, a number of performance attributes are of interest and must be measured in order to characterize each of the considered systems. We identified and evaluated several approaches to incorporate the uncertainty in the attribute value estimates into a normative model for a multiple attribute selection decision. Assuming an additive multiple attribute value model, we demonstrated the idea of propagating the attribute value uncertainty and describing the decision values for each alternative as probability distributions. These distributions were used to select an alternative. With the goal of maximizing the probability of correct selection, we developed and evaluated, under several different sets of assumptions, procedures to allocate the fixed experimental budget across the multiple attributes and alternatives. Through a series of simulation studies, we compared the performance of these allocation procedures to the simple, but common, procedure that distributes the sample budget equally across the alternatives and attributes. We found that the allocation procedures developed based on the inclusion of decision-maker knowledge, such as knowledge of the decision model, outperformed those that neglected such information. Beginning with general knowledge of the attribute values provided by Bayesian prior distributions, and updating this knowledge with each observed sample, the sequential allocation procedure performed particularly well. These observations demonstrate that managing projects focused on a selection decision so that the decision modeling and the experimental planning are done jointly, rather than in isolation, can improve the overall selection results.
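A compact sketch of the sequential idea (illustrative priors, weights and a simple uncertainty-driven allocation rule of our own, not the study's procedure): normal posteriors per alternative-attribute cell are updated after each observation, and the next sample goes where it most affects the comparison between the current front-runners:

```python
import numpy as np

rng = np.random.default_rng(1)
n_alt, n_attr, budget = 3, 2, 60
w = np.array([0.6, 0.4])                          # additive value-model weights
truth = rng.normal(0.0, 1.0, (n_alt, n_attr))     # unknown true attribute values

mu = np.zeros((n_alt, n_attr))                    # posterior means
var = np.full((n_alt, n_attr), 4.0)               # diffuse prior variances

for _ in range(budget):
    value_mu = mu @ w
    top2 = np.argsort(-value_mu)[:2]              # leader and closest rival
    # Sample the cell whose remaining uncertainty most clouds that comparison:
    i = max(top2, key=lambda t: (w**2 * var[t]).sum())
    j = int(np.argmax(w**2 * var[i]))
    y = rng.normal(truth[i, j], 1.0)              # one observation, noise var = 1
    prec = 1.0 / var[i, j]                        # conjugate normal update
    var[i, j] = 1.0 / (prec + 1.0)
    mu[i, j] = var[i, j] * (prec * mu[i, j] + y)

print("selected:", int(np.argmax(mu @ w)), "true best:", int(np.argmax(truth @ w)))
```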
Abstract:
The stirring of a body of viscous fluid using multiple stirring rods is known to be particularly effective when the rods trace out a path corresponding to a nontrivial mathematical braid. The optimal braid is the so-called "pigtail braid", in which three stirring rods execute the usual "over-under" motion associated with braiding (plaiting) hair. We show how to achieve this optimal braiding motion straightforwardly: one stirring rod is driven in a figure-of-eight motion, while the other two rods are baffles, which rotate episodically about their common centre. We also explore the extent to which the physical baffles may be replaced by flow structures (such as periodic islands).
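The prescribed motion is simple to parameterize; a sketch (with illustrative amplitudes and timing) of the driven rod's figure-of-eight and the two baffles' episodic half-turns about their common centre:

```python
import numpy as np

t = np.linspace(0.0, 4.0, 2001)                   # time, in stirring periods

# Driven rod: a figure-of-eight (Lissajous 1:2) path through the centre.
rod = np.c_[np.sin(2 * np.pi * t), 0.5 * np.sin(4 * np.pi * t)]

# Baffles: at rest except for a smooth half-turn during the first quarter
# of each period, i.e. rotating episodically about their common centre.
frac = t - np.floor(t)
turn = np.clip(frac / 0.25, 0.0, 1.0)             # each swap takes a quarter period
theta = np.pi * (np.floor(t) + 0.5 * (1.0 - np.cos(np.pi * turn)))
baffle1 = np.c_[0.6 * np.cos(theta), 0.6 * np.sin(theta)]
baffle2 = -baffle1                                # diametrically opposite baffle
```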
Abstract:
Background: People with relapsing remitting MS (PwRRMS) suffer disproportionate decrements in gait under dual-task conditions, when walking and a cognitive task are combined. There has been much less investigation of the impact of cognitive demands on balance. This study investigated whether: (1) PwRRMS show disproportionate decrements in postural stability under dual-task conditions compared to healthy controls; and (2) dual-task decrements are associated with everyday dual-tasking difficulties. The impact of mood, fatigue and disease severity on dual-tasking was also examined. Methods: 34 PwRRMS and 34 matched controls completed cognitive (digit span) and balance (movement of centre of pressure on a Biosway, on stable and unstable surfaces) tasks under single- and dual-task conditions. Everyday dual-tasking was measured using the DTQ, mood by the HADS, and fatigue via the MFIS. Results: There were no differences in age, gender, years of education, estimated pre-morbid IQ or baseline digit span between the groups. Compared to healthy controls, PwRRMS showed a significantly greater decrement in postural stability under dual-task conditions on an unstable surface (p=0.007), but not on a stable surface (p=0.679). PwRRMS reported higher levels of everyday dual-tasking difficulties (p<0.001). Balance decrement scores were not correlated with everyday dual-tasking difficulties, or with fatigue. Stable-surface balance decrement scores were significantly associated with levels of anxiety (rho=0.527, p=0.001) and depression (rho=0.451, p=0.007). Conclusion: RRMS causes difficulties with dual-tasking, impacting balance, particularly under challenging conditions, which may contribute to an increased risk of gait difficulties and falls. The striking relationship between anxiety/depression and dual-task decrement suggests that worry may be contributing to dual-task difficulties.
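For readers unfamiliar with the metric: the dual-task decrement is conventionally the relative change from single- to dual-task performance per participant; a sketch with simulated stand-in data (the study's data are not reproduced here):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
single = rng.normal(100.0, 10.0, 34)       # single-task sway (e.g., CoP path length)
dual = single + rng.normal(12.0, 8.0, 34)  # dual-task sway, same participants
anxiety = rng.normal(8.0, 3.0, 34)         # e.g., HADS anxiety scores (simulated)

# Dual-task cost: percentage decrement relative to single-task performance.
dtc = 100.0 * (dual - single) / single
rho, p = spearmanr(dtc, anxiety)           # rank correlation, as reported above
print(f"median DTC = {np.median(dtc):.1f}%, rho = {rho:.2f} (p = {p:.3f})")
```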
Abstract:
The flow rates of the drying and nebulizing gases, the heat block and desolvation line temperatures, and the interface voltage are potential electrospray ionization parameters to tune, as they may enhance the sensitivity of the mass spectrometer. The conditions giving higher sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen for significant factors, and it was concluded that interface voltage and nebulizing gas flow were the only factors that influence the intensity signal for all pharmaceuticals. This fractional factorial design was projected to set up a full 2^2 factorial design with center points. The lack-of-fit test proved to be significant. A central composite face-centered design was then conducted. Finally, a stepwise multiple linear regression was carried out, followed by the solution of an optimization problem. Two main drug clusters were found concerning the signal intensities across all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide constitute one cluster, as a result of showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that the instrumental signal increased when both significant factors increased, with the maximum signal occurring when both codified factors were set at level +1. It was also found that, for most of the pharmaceuticals, the interface voltage influences the intensity of the instrument more than the nebulizing gas flow rate. The only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors influence the instrumental signal equally.
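The screening-to-response-surface workflow described above can be illustrated in miniature (coded levels and responses invented for the example): a 2^2 full factorial with center points for the two retained factors, fitted by ordinary least squares, where a centre response far from the first-order prediction exposes the curvature that motivated the face-centred composite design:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Coded factor levels for a 2^2 full factorial plus three centre points;
# the signal responses are invented for illustration.
runs = pd.DataFrame({
    "voltage": [-1, -1, 1, 1, 0, 0, 0],           # interface voltage (coded)
    "gas":     [-1, 1, -1, 1, 0, 0, 0],           # nebulizing gas flow (coded)
    "signal":  [52, 61, 70, 95, 80, 82, 81],
})

# First-order model with interaction, fitted on the factorial points only.
factorial = runs[runs.voltage != 0]
fit = smf.ols("signal ~ voltage * gas", data=factorial).fit()

# Curvature check: centre-point mean vs the first-order prediction at (0, 0).
centre_mean = runs.loc[runs.voltage.eq(0), "signal"].mean()
centre_pred = fit.predict(pd.DataFrame({"voltage": [0], "gas": [0]}))[0]
print(fit.params)
print(f"centre mean = {centre_mean:.1f} vs first-order prediction = {centre_pred:.1f}")
```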
Abstract:
Historically, the imaginary and hegemonic thinking of the Western global north has been marked by capitalist epistemologies and archetypes. Design may seem a practice and discipline shielded by a simplistic discourse of functional/communicative efficiency, wandering through multiple aestheticisms apparently neutral in relation to the symbolic; but in fact they never are, because the aesthetic appearance of the generated forms will always be an expression of the ruling powers. We start from the understanding that the act of creating an aesthetic artifact is also a movement of inscription onto a discursive platform (one that precedes it); it is in itself a narrative act, and as such represents a taking of position in relation to a certain symbolic reality. In the reflection presented here, design is seen as a discipline and/or an instrument of action whose operational relevance tends to question, and simultaneously to rehearse a response, in which answering "why" matters more than "how". Apparently design is a mediator of content, but it is also structure, body, and idea. We think of design praxis as a discipline and as a tool for the inscription of critical thought and social transformation. To guide the research in this text, we propose the following question: can design claim for itself an engagement with the symbolic, so as to take an active part in the production of critical thinking in the place to which it belongs? Methodologically, our argument is presented in two distinct moments: 1. a first moment, of an exploratory nature, in which we recover the issues of drawing in the practice of design; and 2. a second moment, of an analytical nature, concerning how (graphic and/or utilitarian) design incorporates the formal rites, political events and social practices of contemporary everyday life. We consider the praxis of design, as a discipline and a tool for the inscription of critical thinking, to be an agent of social transformation. With this study we seek to contribute to a phenomenology of design by studying the configuration of artifacts, the messages they may convey, and the impact they may have on the social network.