972 results for computational study
Abstract:
It is well known that the small imperfections of the individual blades in a turbomachinery rotor (known as "mistuning") can cause a substantial increase in the forced-response vibration amplitude, while typically improving the flutter characteristics of the rotor. These phenomena can be studied by performing numerical simulations of the complete aeroelastic problem. However, the computation of mistuning cases using high-fidelity models is a formidable task, because a detailed model of the whole rotor has to be considered, and a statistical study has to be carried out in order to properly explore the effect of the random mistuning distributions. Many reduced-order models have been developed in recent years to overcome this barrier. One of these models, the Asymptotic Mistuning Model (AMM), is systematically derived from the complete bladed-disk formulation using a consistent perturbative procedure that exploits the smallness of mistuning to simplify the problem. The AMM retains only the essential system modes involved in the mistuning effect, and it makes it possible to identify the key mechanisms behind the amplification of the forced response and the stabilization of flutter. In this work, the AMM methodology is used to study the effect of structural and damping mistuning on the forced-response vibration amplitude. The results are verified using a one-degree-of-freedom model of a rotor, and also high-fidelity models of the complete rotor. The AMM is also applied, within the framework of the European FP7 project "Flutter-Free Turbomachinery Blades (FUTURE)", to design two intentional mistuning patterns: (i) one to completely stabilize an unstable rotor, and (ii) another to approximately halve its flutter amplitude. The designed patterns are validated experimentally. Finally, the ability of the AMM to predict the flutter behavior of mistuned rotors is checked against high-fidelity numerical CFD results.
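To make the amplification mechanism concrete, the following is a minimal sketch (not the AMM itself) of a lumped-parameter bladed-disk model with one degree of freedom per sector: a cyclic mass-spring chain under traveling-wave forcing, where a small random stiffness scatter plays the role of mistuning. All parameter values are illustrative assumptions.

```python
# Minimal sketch (not the AMM): forced response of a cyclic mass-spring
# bladed-disk model with one degree of freedom per sector, illustrating how
# small stiffness mistuning amplifies the vibration amplitude.
# All parameters (N, coupling, damping, engine order) are illustrative.
import numpy as np

N = 24                                  # number of blades/sectors (assumed)
m, k, kc, c = 1.0, 1.0, 0.05, 0.005     # mass, stiffness, coupling, damping
engine_order = 3                        # spatial harmonic of the forcing

rng = np.random.default_rng(0)
mistuning = 0.02 * rng.standard_normal(N)   # ~2% random stiffness scatter

# Assemble the cyclic stiffness matrix with nearest-neighbor coupling.
K = np.zeros((N, N))
for j in range(N):
    K[j, j] = k * (1.0 + mistuning[j]) + 2.0 * kc
    K[j, (j - 1) % N] = -kc
    K[j, (j + 1) % N] = -kc

M = m * np.eye(N)
C = c * np.eye(N)

# Traveling-wave forcing at engine order r: f_j = exp(i * 2*pi*r*j / N).
f = np.exp(1j * 2.0 * np.pi * engine_order * np.arange(N) / N)

def max_amplitude(K, omegas):
    """Sweep the forcing frequency and return the largest blade amplitude."""
    best = 0.0
    for w in omegas:
        x = np.linalg.solve(-w**2 * M + 1j * w * C + K, f)
        best = max(best, np.abs(x).max())
    return best

omegas = np.linspace(0.9, 1.2, 400)
K_tuned = K - np.diag(k * mistuning)    # the same system without mistuning
amp_mist = max_amplitude(K, omegas)
amp_tuned = max_amplitude(K_tuned, omegas)
print(f"amplification factor due to mistuning: {amp_mist / amp_tuned:.2f}")
```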
Abstract:
Light microscopy of thick biological samples, such as tissues, is often limited by aberrations caused by refractive index variations within the sample itself. This problem is particularly severe for live imaging, a field of great current excitement due to the development of inherently fluorescent proteins. We describe a method of removing such aberrations computationally by mapping the refractive index of the sample using differential interference contrast microscopy, modeling the aberrations by ray tracing through this index map, and using space-variant deconvolution to remove aberrations. This approach will open possibilities to study weakly labeled molecules in difficult-to-image live specimens.
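The deconvolution step lends itself to a compact illustration. Below is a minimal sketch of space-variant deconvolution by tiling: each tile is deconvolved with its own locally valid PSF (assumed to come from the ray-tracing step described above, supplied here through a hypothetical psf_for_tile callback). Seam blending and edge handling are omitted.

```python
# Minimal sketch of space-variant deconvolution: the image is split into
# tiles, each deconvolved with its own locally valid PSF (here assumed to
# come from ray tracing through the refractive-index map, as in the paper).
# Plain Richardson-Lucy per tile; blending at tile seams is omitted.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=20):
    """Basic Richardson-Lucy deconvolution for a single (local) PSF."""
    image = np.asarray(image, dtype=float)
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

def space_variant_deconvolve(image, psf_for_tile, tile=64):
    """Deconvolve each tile with its own PSF. The hypothetical psf_for_tile
    callback returns the PSF valid around a given tile center."""
    out = np.zeros_like(np.asarray(image, dtype=float))
    for i in range(0, image.shape[0], tile):
        for j in range(0, image.shape[1], tile):
            patch = image[i:i + tile, j:j + tile]
            psf = psf_for_tile(i + tile // 2, j + tile // 2)
            out[i:i + tile, j:j + tile] = richardson_lucy(patch, psf)
    return out
```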
Abstract:
We have demonstrated that it is possible to radically change the specificity of maltose binding protein by converting it into a zinc sensor using a rational design approach. In this new molecular sensor, zinc binding is transduced into a readily detected fluorescence signal by use of an engineered conformational coupling mechanism linking ligand binding to reporter group response. An iterative progressive design strategy led to the construction of variants with increased zinc affinity by combining binding sites, optimizing the primary coordination sphere, and exploiting conformational equilibria. Intermediates in the design series show that the adaptive process involves both introduction and optimization of new functions and removal of adverse vestigial interactions. The latter demonstrates the importance of the rational design approach in uncovering cryptic phenomena in protein function, which cannot be revealed by the study of naturally evolved systems.
Abstract:
This paper presents an algorithm for identifying noun-phrase antecedents of pronouns and adjectival anaphors in Spanish dialogues. We believe that anaphora resolution requires numerous sources of information in order to find the correct antecedent of the anaphor. These sources can be of different kinds, e.g., linguistic information, discourse/dialogue structure information, or topic information. For this reason, our algorithm uses various different kinds of information (hybrid information). The algorithm is based on linguistic constraints and preferences and uses an anaphoric accessibility space within which the algorithm finds the noun phrase. We present some experiments related to this algorithm and this space using a corpus of 204 dialogues. The algorithm is implemented in Prolog. According to this study, 95.9% of antecedents were located in the proposed space, a precision of 81.3% was obtained for pronominal anaphora resolution, and 81.5% for adjectival anaphora.
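The constraints-then-preferences scheme can be illustrated compactly. The original system is implemented in Prolog; the following Python rendering is only a sketch with made-up agreement constraints and preference weights: candidates inside the accessibility space are filtered by hard morphological constraints, and the survivors are ranked by preferences such as recency and subject role.

```python
# Minimal sketch of a constraints-then-preferences anaphora resolver
# (the original system is in Prolog; this Python rendering is illustrative).
# Candidate noun phrases inside the anaphoric accessibility space are first
# filtered by hard constraints, then the survivors are ranked by preferences.
from dataclasses import dataclass

@dataclass
class NounPhrase:
    text: str
    gender: str       # 'm' or 'f'
    number: str       # 'sg' or 'pl'
    is_subject: bool
    distance: int     # sentences back from the anaphor

def resolve(anaphor, candidates):
    # Hard constraints: morphological agreement in gender and number.
    viable = [np_ for np_ in candidates
              if np_.gender == anaphor.gender and np_.number == anaphor.number]
    if not viable:
        return None
    # Preferences (illustrative weights): favor recent NPs and subjects.
    def score(np_):
        return -np_.distance + (1 if np_.is_subject else 0)
    return max(viable, key=score)

pronoun = NounPhrase("ella", "f", "sg", False, 0)
candidates = [NounPhrase("la casa", "f", "sg", False, 2),
              NounPhrase("Maria", "f", "sg", True, 1),
              NounPhrase("los perros", "m", "pl", True, 1)]
print(resolve(pronoun, candidates).text)   # -> "Maria"
```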
Abstract:
In this paper we present a study of the computational cost of the GNG3D algorithm for mesh optimization. The algorithm implements a new neural-network-based method consisting of two distinct phases: an optimization phase and a reconstruction phase. The optimization phase applies an optimization algorithm based on the Growing Neural Gas model, an unsupervised incremental clustering algorithm. The primary goal of this phase is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The computational cost of both phases is calculated and illustrated with some examples.
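As a rough illustration of the optimization phase, here is a minimal Growing Neural Gas sketch in Python: the network's units converge to a simplified set of vertices approximating the original point cloud. All parameters are illustrative assumptions, and the reconstruction (face-building) phase is not shown.

```python
# Minimal Growing Neural Gas sketch for the optimization phase: the units
# converge to a simplified vertex set approximating the original 3D object.
# Parameters are illustrative; face reconstruction is not shown.
import numpy as np

def gng_simplify(points, max_units=100, steps=20000, eps_w=0.05, eps_n=0.005,
                 max_age=50, insert_every=100, alpha=0.5, decay=0.995):
    rng = np.random.default_rng(0)
    units = [points[rng.integers(len(points))].copy() for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}                               # (i, j) with i < j -> age

    for step in range(1, steps + 1):
        x = points[rng.integers(len(points))]
        d = [np.sum((u - x) ** 2) for u in units]
        s1, s2 = np.argsort(d)[:2]           # winner and runner-up
        error[s1] += d[s1]
        units[s1] += eps_w * (x - units[s1]) # move winner toward the sample
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1           # age edges incident to winner
                other = j if i == s1 else i
                units[other] += eps_n * (x - units[other])
        edges[tuple(sorted((s1, s2)))] = 0   # refresh/create winner edge
        edges = {e: a for e, a in edges.items() if a <= max_age}

        # Periodically insert a unit between the worst unit and its worst neighbor.
        if step % insert_every == 0 and len(units) < max_units:
            q = int(np.argmax(error))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                w = max(nbrs, key=lambda nb: error[nb])
                units.append(0.5 * (units[q] + units[w]))
                error[q] *= alpha
                error[w] *= alpha
                error.append(error[q])
                edges.pop(tuple(sorted((q, w))), None)
                new = len(units) - 1
                edges[tuple(sorted((q, new)))] = 0
                edges[tuple(sorted((w, new)))] = 0
        error = [e * decay for e in error]
    return np.array(units)
```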
Abstract:
Information and Communication Technologies (ICT) are presented as the key element for achieving more efficient and sustainable management of city resources, while ensuring that citizens' needs for a better quality of life are met. A key element will be the creation of new systems that acquire context information automatically and transparently, in order to provide it to decision support systems. In this paper, we present a novel distributed system for obtaining, representing and providing the flow and movement of people in densely populated geographical areas. To accomplish these tasks, we propose the design of a smart sensor network based on RFID communication technologies, reliability patterns and integration techniques. Unlike other proposals, this system is a comprehensive solution that permits the acquisition of user information in a transparent and reliable way in an uncontrolled, heterogeneous environment. This knowledge will be useful in moving towards the design of smart cities in which decision support on transport strategies, business evaluation or initiatives in the tourism sector is backed by real, relevant information. Finally, a case study is presented to validate the proposal.
Abstract:
The sustainability strategy in urban spaces arises from reflecting on how to achieve a more habitable city, and it materializes in a series of sustainable transformations aimed at humanizing different environments so that they can be used and enjoyed by everyone, without exception and regardless of ability. Modern communication technologies open up new opportunities to analyze the efficiency of the use of urban spaces from several points of view: adequacy of facilities, usability, and capacity for social integration. The research presented in this paper proposes a method for analyzing movement accessibility in sustainable cities, based on radio-frequency technologies and the ubiquitous computing possibilities of the new Internet of Things paradigm. The proposal can be deployed in both indoor and outdoor environments to check specific locations of a city. Finally, a case study in a controlled context has been simulated to validate the proposal as a pre-deployment step in urban environments.
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine, for volumetric reconstruction of tomography data; robotics, to reconstruct surfaces or scenes using range-sensor information; industrial systems, for quality control of manufactured objects; and even biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance by reducing the number of points or the required iterations, or by reducing the cost of the most expensive phase: the nearest-neighbor search. Although they decrease the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems of the kinds described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm. In this analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computation performed by the algorithm, any reduction in its cost can be expected to have a significant, positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated to be comparable to that of the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
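The central idea, swapping the correspondence-search metric for a cheaper one, can be sketched compactly. Below is a minimal point-to-point ICP in Python where the nearest-neighbor query uses the Manhattan distance (Minkowski p=1) instead of the Euclidean one, while the rigid transform is still estimated with the usual SVD step; parameters and tolerances are illustrative.

```python
# Minimal point-to-point ICP sketch: the nearest-neighbor search (the
# dominant cost) uses a cheaper metric than the Euclidean one, here the
# Manhattan distance (Minkowski p=1), while the rigid transform is still
# estimated by the standard SVD (Kabsch) step.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=50, p=1, tol=1e-8):
    """Align src to dst; p=1 uses the Manhattan distance for matching."""
    tree = cKDTree(dst)
    current = src.copy()
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(current, p=p)   # correspondence search
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return current
```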
Abstract:
The Remez penalty and smoothing algorithm (RPSALG) is a unified framework for penalty and smoothing methods for solving min-max convex semi-infinite programming problems, whose convergence was analyzed in a previous paper by three of the authors. In this paper we consider a partial implementation of RPSALG for solving ordinary convex semi-infinite programming problems. Each iteration of RPSALG involves two types of auxiliary optimization problems: the first consists of obtaining an approximate solution of some discretized convex problem, while the second requires solving a non-convex optimization problem whose objective function is the parametric constraint, with the parameter as the variable. In this paper we tackle the latter problem with a variant of the cutting angle method called ECAM, a global optimization procedure for solving Lipschitz programming problems. We implement different variants of RPSALG and compare them with the only publicly available SIP solver, NSIPS, on a battery of test problems.
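To fix ideas about the two auxiliary problems, here is a minimal discretization/exchange sketch for a toy linear semi-infinite program: an outer finite problem solved by linear programming, and an inner global maximization of the parametric constraint over the index set, done here by brute-force grid search as a stand-in for ECAM. The example SIP and all tolerances are made up; this is not the RPSALG implementation.

```python
# Minimal discretization/exchange sketch for a toy linear semi-infinite
# program, mirroring the two auxiliary problems: an outer finite convex
# problem and an inner global maximization of the parametric constraint.
# The inner step uses grid search as a stand-in for the ECAM global solver.
import numpy as np
from scipy.optimize import linprog

# Toy SIP:  minimize  -x1 - x2
#           s.t.  g(x, t) = t*x1 + t**2 * x2 - (1 + t) <= 0  for all t in [0, 1]
c = np.array([-1.0, -1.0])
T = np.linspace(0.0, 1.0, 2001)          # fine grid over the index set

def g_row(t):
    """Coefficients (a, b) of the constraint a @ x <= b at parameter t."""
    return np.array([t, t**2]), 1.0 + t

# Exchange method: start from a crude discretization and keep adding the
# most violated constraint until no parameter violates the current solution.
active = [0.0, 1.0]
for _ in range(50):
    A = np.array([g_row(t)[0] for t in active])
    b = np.array([g_row(t)[1] for t in active])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(-10, 10), (-10, 10)])
    x = res.x
    # Inner problem: globally maximize g(x, t) over t (grid stand-in for ECAM).
    viol = np.array([g_row(t)[0] @ x - g_row(t)[1] for t in T])
    if viol.max() <= 1e-8:
        break
    active.append(T[np.argmax(viol)])

print("solution:", x, "active parameters:", sorted(active))
```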
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease where the heart muscle is partially thickened and blood flow is - potentially fatally - obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and Echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests due to considerations of cost and time involved in interpreting the results of these tests by an expert cardiologist. Initially we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past for classifying individual heartbeats into different types of arrhythmia as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients into two groups: HCM vs. non-HCM. The challenges and issues we address include missing ECG waves in one or more leads and the dimensionality of a large feature-set. We address these by proposing imputation and feature-selection methods. We develop heartbeat-classifiers by employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM. The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus the two contributions of this thesis are the utilization of computational and statistical methods for discovering shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
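The two-stage scheme (beat-level classification, then a patient-level decision from the proportion of HCM-labeled beats) can be sketched as follows. The data are random stand-ins, and feature extraction and imputation are assumed to have been done already; this is not the thesis pipeline, only an illustration of its shape.

```python
# Minimal sketch of the two-stage scheme: a heartbeat-level classifier
# (a Random Forest here), then a patient-level decision based on the
# proportion of beats classified as HCM. Feature extraction from the
# 12-lead ECG and the imputation step are assumed done; all data, shapes
# and thresholds are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 1000 heartbeats x 40 features, binary labels.
X_beats = rng.standard_normal((1000, 40))
y_beats = rng.integers(0, 2, 1000)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_beats, y_beats)

def classify_patient(beat_features, threshold=0.5):
    """Label a full ECG recording HCM if enough beats look like HCM."""
    beat_labels = clf.predict(beat_features)
    hcm_fraction = beat_labels.mean()
    return ("HCM" if hcm_fraction >= threshold else "non-HCM"), hcm_fraction

label, frac = classify_patient(rng.standard_normal((120, 40)))
print(label, f"(fraction of HCM beats: {frac:.2f})")
```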
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM. This is done by achieving two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective of this thesis is to address the emerging problem of long assessment time, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time-reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on time reduction in the two longest individual KINARM tasks. Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reduction of assessment time on a broader set of KINARM tasks. All in all, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
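The first objective, estimating clinical abilities from robotic biomarkers, amounts to a regression problem and can be sketched as follows with synthetic stand-in data (the variable names, model choice and dimensions are illustrative assumptions, not the thesis data or method).

```python
# Minimal sketch of the first objective: regressing a clinical score
# (e.g., an activities-of-daily-living scale) on KINARM-derived biomarkers.
# All data are synthetic stand-ins; the model choice is illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_biomarkers = 80, 12
X = rng.standard_normal((n_subjects, n_biomarkers))   # robotic task metrics
beta = rng.standard_normal(n_biomarkers)
y = X @ beta + 0.5 * rng.standard_normal(n_subjects)  # synthetic score

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```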
Abstract:
Primary sex determination in placental mammals is a very well studied developmental process. Here, we aim to investigate the currently established scenario and to assess its adequacy for fully recovering the observed phenotypes, in both wild-type and perturbed situations. Computational modelling makes it possible to clarify the network dynamics and to elucidate crucial temporal constraints as well as the interplay between the core regulatory modules.
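As a rough illustration of this kind of logical modelling, here is a toy synchronous Boolean network simulation in Python; the three nodes and their rules are entirely hypothetical and do not reproduce the actual sex-determination network.

```python
# Toy synchronous Boolean network simulation, of the kind used to study
# regulatory network dynamics. The three nodes and their rules are
# hypothetical; the real sex-determination network is not reproduced here.
from itertools import product

rules = {
    "A": lambda s: not s["C"],            # A is repressed by C
    "B": lambda s: s["A"] and not s["C"], # B needs A, is repressed by C
    "C": lambda s: s["B"],                # C is activated by B
}

def step(state):
    return {node: rule(state) for node, rule in rules.items()}

def attractor(state, max_steps=50):
    """Iterate the synchronous dynamics until a seen state recurs."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return seen[seen.index(state):]   # the attractor cycle
        seen.append(state)
        state = step(state)
    return None

# Enumerate the attractors reached from all 8 initial states.
for bits in product([False, True], repeat=3):
    s0 = dict(zip("ABC", bits))
    cyc = attractor(s0)
    print(bits, "->", [tuple(int(v[n]) for n in "ABC") for v in cyc])
```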
Abstract:
The eardrum separates the external ear from the middle ear and is responsible for converting acoustical energy into mechanical energy. It is divided into the pars tensa and the pars flaccida. The aim of this work is to analyze the susceptibility of the four quadrants of the pars tensa, under negative pressure, to different distributions of the lamina propria fibers. The development of associated ear pathology, in particular the formation of retraction pockets, is also evaluated. To analyze these effects, a computational biomechanical model of the tympano-ossicular chain was constructed from computerized tomography images and based on the finite element method. Three fiber distributions in the middle layer of the eardrum were compared: case 1 (eardrum with a circular band of fibers surrounding all quadrants equally), case 2 (eardrum with a circular band of fibers whose thickness decreases in the posterior quadrants), and case 3 (eardrum without circular fibers in the posterior/superior quadrant). A static analysis was performed by applying approximately 3000 Pa to the eardrum. The pars tensa was divided into four quadrants and the displacement of a central point of each quadrant was analyzed. The largest displacements were obtained for the eardrum without circular fibers in the posterior/superior quadrant.
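For a flavor of the finite element method used in such static analyses, the following is a toy sketch: the deflection of a pre-tensioned square membrane under a uniform load of about 3000 Pa, discretized with linear triangles. The membrane equation, tension value and dimensions are illustrative assumptions; this is not the tympano-ossicular model of the paper.

```python
# Toy finite-element sketch: deflection of a pre-tensioned square membrane
# under uniform pressure (-T * Laplacian(w) = p, w = 0 on the boundary),
# with linear triangular elements. A stand-in for the kind of static FEM
# analysis described above; T, dimensions and mesh size are assumptions.
import numpy as np

n = 21                                   # grid points per side
T, p = 100.0, 3000.0                     # membrane tension (assumed), pressure
xs = np.linspace(0, 0.01, n)             # 1 cm x 1 cm toy membrane
nodes = np.array([(x, y) for y in xs for x in xs])
idx = lambda i, j: j * n + i

# Two triangles per grid cell.
tris = []
for j in range(n - 1):
    for i in range(n - 1):
        a, b, c, d = idx(i, j), idx(i+1, j), idx(i+1, j+1), idx(i, j+1)
        tris += [(a, b, c), (a, c, d)]

K = np.zeros((n*n, n*n))
f = np.zeros(n*n)
for tri in tris:
    (x1, y1), (x2, y2), (x3, y3) = nodes[list(tri)]
    area = 0.5 * abs((x2-x1)*(y3-y1) - (x3-x1)*(y2-y1))
    bvec = np.array([y2-y3, y3-y1, y1-y2])
    cvec = np.array([x3-x2, x1-x3, x2-x1])
    Ke = T * (np.outer(bvec, bvec) + np.outer(cvec, cvec)) / (4*area)
    for a_, ia in enumerate(tri):
        f[ia] += p * area / 3.0          # consistent nodal load
        for b_, ib in enumerate(tri):
            K[ia, ib] += Ke[a_, b_]

# Clamp the boundary (w = 0) and solve for the interior deflection.
boundary = [idx(i, j) for j in range(n) for i in range(n)
            if i in (0, n-1) or j in (0, n-1)]
interior = np.setdiff1d(np.arange(n*n), boundary)
w = np.zeros(n*n)
w[interior] = np.linalg.solve(K[np.ix_(interior, interior)], f[interior])
print(f"max deflection: {w.max()*1000:.3f} mm")
```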
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06