822 results for distance-metrics
Abstract:
The objective of this paper is to provide performance metrics for small-signal stability assessment of a given system architecture. The stability margins are stated using the concept of maximum peak criteria (MPC), derived from the behavior of an impedance-based sensitivity function. For each minor-loop gain defined at every system interface, a single number stating the robustness of stability is provided, based on the computed maximum value of the corresponding sensitivity function. To compare various power-architecture solutions in terms of stability, a parameter providing an overall measure of whole-system stability is required. The selected figure of merit is the geometric average of the maximum peak values within the system. It provides a meaningful metric for system comparisons: the best system in terms of robust stability is the one that minimizes this index. In addition, the largest peak value among the system interfaces is reported, thus identifying the weakest point of the system in terms of robustness.
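As a rough sketch (not code from the paper), the overall figure of merit and the weakest interface could be computed from the per-interface sensitivity peaks like this; the function and variable names are assumptions:

```python
import math

def stability_indices(peaks):
    """Given the maximum sensitivity-function peak at each system interface,
    return (overall index, worst peak): the geometric mean of the peaks,
    and the largest peak, which flags the weakest interface.
    Illustrative sketch, not the paper's implementation."""
    if not peaks or any(p <= 0 for p in peaks):
        raise ValueError("peaks must be a non-empty list of positive values")
    geo_mean = math.exp(sum(math.log(p) for p in peaks) / len(peaks))
    return geo_mean, max(peaks)

# A lower geometric mean indicates a more robust architecture overall.
overall, worst = stability_indices([1.2, 1.5, 2.0])
```

The geometric mean is computed in log space, which is numerically safer than multiplying many peak values directly.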
Abstract:
This is an account of some aspects of the geometry of Kähler affine metrics, based on considering them as smooth metric measure spaces and applying the comparison geometry of Bakry-Émery Ricci tensors. Such techniques yield a version for Kähler affine metrics of Yau's Schwarz lemma for volume forms. By a theorem of Cheng and Yau, there is a canonical Kähler affine Einstein metric on a proper convex domain, and the Schwarz lemma gives a direct proof of its uniqueness up to homothety. The potential for this metric is a function canonically associated to the cone, characterized by the property that its level sets are hyperbolic affine spheres foliating the cone. It is shown that for an n-dimensional cone, a rescaling of the canonical potential is an n-normal barrier function in the sense of interior point methods for conic programming. It is also explained how to construct from the canonical potential Monge-Ampère metrics of both Riemannian and Lorentzian signatures, and a mean curvature zero conical Lagrangian submanifold of the flat para-Kähler space.
Abstract:
Sight distance plays an important role in road traffic safety. Two types of Digital Elevation Models (DEMs) are used for the estimation of available sight distance on roads: Digital Terrain Models (DTMs) and Digital Surface Models (DSMs). DTMs, which represent the bare ground surface, are commonly used to determine available sight distance at the design stage. The use of DSMs additionally provides information about elements along the roadsides, such as trees, buildings, walls or even traffic signals, which may reduce the available sight distance. This document analyses the influence of three classes of DEMs on available sight distance estimation. For this purpose, diverse roads within the Region of Madrid (Spain) have been studied using software based on geographic information systems. The study evidences the influence of each DEM on the outcome, as well as the pros and cons of using each model.
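A minimal illustration of how a sight-line visibility test on a gridded DEM can work (nearest-cell sampling along the line; the function name, cell size and the observer/target heights are assumptions, not the software used in the study):

```python
import numpy as np

def line_of_sight(dem, obs, tgt, eye_h=1.1, obj_h=0.6, samples=50):
    """Return True if the target cell is visible from the observer cell
    over a regular-grid DEM (unit cell size). The sight line is sampled
    and compared against the terrain elevation of the nearest cell.
    Simplified sketch: ignores earth curvature and refraction."""
    z0 = dem[obs] + eye_h            # observer eye elevation
    z1 = dem[tgt] + obj_h            # target object elevation
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:
        r = int(round(obs[0] + t * (tgt[0] - obs[0])))
        c = int(round(obs[1] + t * (tgt[1] - obs[1])))
        sight_z = z0 + t * (z1 - z0)  # sight-line elevation at this sample
        if dem[r, c] > sight_z:       # terrain blocks the line
            return False
    return True
```

Run against a DTM, the obstruction is bare ground only; running the same test on a DSM additionally accounts for trees, buildings or walls encoded in the surface model, which is exactly why the choice of DEM changes the estimated available sight distance.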
Abstract:
Three-dimensional kinematic analysis provides a quantitative assessment of upper limb motion and is used as an outcome measure to evaluate movement disorders. The aim of the present study is to present a set of kinematic metrics for quantifying characteristics of movement performance and the functional status of the subject during the execution of the activity of daily living (ADL) of drinking from a glass, to apply these metrics to healthy people and to a population with cervical spinal cord injury (SCI), and to analyze the metrics' ability to discriminate between healthy and pathologic subjects. Nineteen people participated in the study: 7 subjects with metameric level C6 tetraplegia, 4 subjects with metameric level C7 tetraplegia and 8 healthy subjects. The movement was recorded with a photogrammetry system. The ADL of drinking was divided into a series of clearly identifiable phases to facilitate analysis. Metrics describing the time of the reaching phase, the range of motion of the analyzed joints, and characteristics of movement performance such as the efficiency, accuracy and smoothness of the distal segment and inter-joint coordination were obtained. The performance of the drinking task was more variable in people with SCI than in the control group with respect to the measured metrics. Reaching time was longer in the SCI groups. The proposed metrics showed the capability to discriminate between healthy and pathologic subjects. Relative deficits in efficiency were larger in SCI people than in controls. These metrics can provide useful information in a clinical setting about the quality of the movement performed by healthy and SCI people during functional activities.
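For instance, one common way to quantify the efficiency of the distal (hand) segment during reaching is the ratio of straight-line distance to travelled path length. This is a sketch under an assumed definition, not the study's code:

```python
import numpy as np

def path_efficiency(trajectory):
    """Efficiency of an end-point trajectory: straight-line distance
    between the first and last samples divided by the total path length.
    1.0 means a perfectly straight reach; lower values indicate a more
    roundabout movement (illustrative definition)."""
    pts = np.asarray(trajectory, dtype=float)
    straight = np.linalg.norm(pts[-1] - pts[0])
    path = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    return float(straight / path) if path > 0 else 0.0
```

Applied to 3D marker trajectories from a photogrammetry system, a larger relative drop in this ratio for SCI subjects versus controls would correspond to the efficiency deficits reported above.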
Abstract:
Canonical Correlation Analysis for Interpreting Airborne Laser Scanning Metrics along the Lorenz Curve of Tree Size Inequality
Abstract:
The aim of this study was to compare the race characteristics of the start and turn segments of national- and regional-level swimmers. In the study, 100 and 200-m events were analysed during the finals session of the Open Comunidad de Madrid (Spain) tournament. The "individualized-distance" method with a two-dimensional direct linear transformation algorithm was used to perform the race analyses. National-level swimmers obtained faster velocities in all race segments and stroke comparisons, although significant inter-level differences in start velocity were only obtained in half (8 out of 16) of the analysed events. Higher-level swimmers also travelled longer start and turn distances, but only in the race segments where the gain of speed was high. This was observed in the turn segments, in the backstroke and butterfly strokes and during the 200-m breaststroke event, but not in any of the freestyle events. Time improvements due to the appropriate extension of the underwater subsections appeared to be critical for the end race result and should be carefully evaluated by the "individualized-distance" method.
Abstract:
The present paper describes the preliminary stages of the development of a new, comprehensive model conceived to simulate the evacuation of transport airplanes in certification studies. Two previous steps were devoted to implementing an efficient procedure to define the whole geometry of the cabin and setting up an algorithm for assigning seats to available exits. Now, to clarify the role of the cabin arrangement in the evacuation process, the paper addresses the influence of several restrictions on the seat-to-exit assignment algorithm, maintaining a purely geometrical approach for consistency. Four situations are considered: first, an assignment method without limitations, searching for the minimum total distance run by all passengers along their escape paths; second, a protocol that restricts the number of evacuees through each exit according to updated FAR 25 capacity; third, a procedure that tends towards the best proportional sharing among exits but obliges each passenger to egress through the nearest fore or rear exit; and fourth, a scenario which includes both restrictions. The four assignment strategies are applied to turboprops, narrow-body jets and wide-body jets. Seat-to-exit distance and number of evacuees per exit are the main output variables. The results show the influence of airplane size and the impact of asymmetries and inappropriate matching between the size and longitudinal location of exits.
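A greedy, purely geometric sketch of the nearest-exit-with-capacity idea (an illustrative heuristic, not the paper's assignment algorithm; positions are assumed to be longitudinal coordinates in metres):

```python
def assign_seats_to_exits(seat_x, exit_x, capacity):
    """Assign each passenger (seat longitudinal position) to the nearest
    exit that still has remaining capacity. Returns one exit index per
    seat. Greedy sketch: it enforces per-exit capacity limits but does
    not guarantee the minimum total distance."""
    remaining = list(capacity)
    assignment = []
    for s in seat_x:
        # exits ordered by longitudinal distance from this seat
        for j in sorted(range(len(exit_x)), key=lambda j: abs(s - exit_x[j])):
            if remaining[j] > 0:
                remaining[j] -= 1
                assignment.append(j)
                break
        else:
            raise ValueError("total exit capacity is insufficient")
    return assignment
```

An unrestricted minimum-total-distance assignment (the first scenario above) would instead be a capacitated assignment problem, solvable exactly with, e.g., a min-cost-flow formulation.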
Abstract:
The Institute of Tropical Medicine in Antwerp hereby presents the results of two pilot distance-learning training programmes developed under the umbrella of the AFRICA BUILD project (FP7). The two courses focused on evidence-based medicine (EBM), with the aim of enhancing research and education via novel approaches and of identifying research needs emanating from the field. These pilot experiences, run in both English-speaking (Ghana) and French-speaking (Mali and Cameroon) partner institutions, produced targeted courses for strengthening research methodology and policy. The courses and related study materials are in the public domain and available through the AFRICA BUILD Portal (http://www.africabuild.eu/taxonomy/term/37); the training modules were delivered live via Dudal webcasts. This paper assesses the successes and difficulties of transferring EBM skills with these two specific training programmes, offered through three different approaches: fully online facultative courses, fully online tutor-supported courses, or a blended approach with both online and face-to-face sessions. Key factors affecting the selection of participants, the accessibility of the courses, how the learning resources are offered, and how interactive online communities are formed are evaluated and discussed.
Abstract:
Because of the high number of crashes occurring on highways, it is necessary to intensify the search for new tools that help in understanding their causes. This research explores the use of a geographic information system (GIS) for an integrated analysis, taking into account two accident-related factors: design consistency (DC) (based on vehicle speed) and available sight distance (ASD) (based on visibility). Both factors require specific GIS software add-ins, which are explained. Digital terrain models (DTMs), vehicle paths, road centerlines, a speed prediction model, and crash data are integrated in the GIS. The usefulness of this approach has been assessed through a study of more than 500 crashes. From a regularly spaced grid, the terrain (bare ground) has been modeled through a triangulated irregular network (TIN). The length of the roads analyzed is greater than 100 km. Results have shown that DC and ASD could be related to crashes in approximately 4% of cases. In order to illustrate the potential of GIS, two crashes are fully analyzed: a car rollover after running off the road on the right side, and a rear-end collision of two moving vehicles. Although this procedure uses two software add-ins that are available only for ArcGIS, the study gives a practical demonstration of the suitability of GIS for conducting integrated studies of road safety.
Abstract:
Context: Measurement is crucial and important to empirical software engineering. Although reliability and validity are two important properties warranting consideration in measurement processes, they may be influenced by random or systematic error (bias), depending on which metric is used. Aim: Check whether the simple subjective metrics used in empirical software engineering studies are prone to bias. Method: Comparison of the reliability of a family of empirical studies on requirements elicitation that explore the same phenomenon using different design types and objective and subjective metrics. Results: The objectively measured variables (experience and knowledge) tend to achieve more reliable results, whereas subjective metrics using Likert scales (expertise and familiarity) tend to be influenced by systematic error, or bias. Conclusions: Studies that predominantly use subjectively measured variables, such as opinion polls or expert opinion acquisition, may therefore be prone to bias.
Abstract:
The study of temperature gradients in cold stores and containers is a critical issue in the food industry for the quality assurance of products during transport and for minimising losses. This work presents an analysis of the temperatures during the refrigerated transport of 4,320 kg of blueberries in a reefer (set-point temperature of −1 °C) on a container ship from Montevideo (Uruguay) to Verona (Italy). The monitoring was performed using semi-passive RFID loggers (TurboTag cards). The objective was to carry out a multi-distributed supervision using low-cost, wireless and autonomous sensors to characterise the distribution and spatial gradients of temperature during long-distance transport. Data analysis shows spatial (phase-space) and temporal sequencing diagrams and reveals a significant heterogeneity of temperature across locations in the container, which highlights the ineffectiveness of a temperature control system based on a single sensor, as is usually done.
Abstract:
Video quality assessment is still a necessary tool for defining the criteria that characterize a signal meeting the viewing requirements imposed by the final user. New technologies, such as stereoscopic 3D video and formats of HD and beyond, impose new criteria that must be analysed to obtain the highest possible user satisfaction. Among the problems detected during the development of this doctoral thesis, phenomena were identified that affect different phases of the audiovisual production chain and varied types of content. First, the content generation process should be controlled through parameters that prevent visual discomfort, and consequently visual fatigue, in the observer's eye; this is especially necessary for stereoscopic 3D sequences, with both animation and live-action content. On the other hand, video quality assessment related to the compression stage employs metrics that are sometimes not adapted to the user's perception. The use of psychovisual models and visual attention diagrams allows weighting the regions of the image so that greater importance is given to the pixels on which the user will most probably focus. These two fields of work are related through the definition of the term saliency: the capacity of the human visual system to characterize an image by highlighting the areas that are most attractive to the human eye. Saliency in the generation of stereoscopic content refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is not based on depth but on other features, such as motion, level of detail, position of the pixels in the frame or the appearance of faces, which are the basic factors composing the visual attention model developed here.
To detect the characteristics of a stereoscopic video sequence that are most likely to generate visual discomfort, the extensive literature on this topic was reviewed and preliminary subjective tests with users were performed. These confirmed that discomfort occurred when there was an abrupt change in the distribution of simulated depths of the image, apart from other degradations such as the so-called "window violation". New subjective tests, focused on analysing these effects with different depth distributions, were used to pin down the parameters defining such images. The results show that abrupt changes occur in environments with high motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the time the crystalline lens needs to focus virtual objects. To improve quality metrics through models adapted to the human visual system, additional subjective tests were performed to determine the importance of each factor in masking a given degradation. The results demonstrate a slight improvement when applying weighting and visual attention masks, which bring the objective quality parameters closer to the response of the human eye.
Abstract:
Photon bursts from single diffusing donor-acceptor labeled macromolecules were used to measure intramolecular distances and identify subpopulations of freely diffusing macromolecules in a heterogeneous ensemble. By using DNA as a rigid spacer, a series of constructs with varying intramolecular donor-acceptor spacings were used to measure the mean and distribution width of fluorescence resonance energy transfer (FRET) efficiencies as a function of distance. The mean single-pair FRET efficiencies qualitatively follow the distance dependence predicted by Förster theory. Possible contributions to the widths of the FRET efficiency distributions are discussed, and potential applications in the study of biopolymer conformational dynamics are suggested. The ability to measure intramolecular (and intermolecular) distances for single molecules implies the ability to distinguish and monitor subpopulations of molecules in a mixture with different distances or conformational states. This is demonstrated by monitoring substrate and product subpopulations before and after a restriction endonuclease cleavage reaction. Distance measurements at single-molecule resolution also should facilitate the study of complex reactions such as biopolymer folding. To this end, the denaturation of a DNA hairpin was examined by using single-pair FRET.
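The Förster distance dependence referred to above is commonly written E = 1 / (1 + (r/R0)^6); a minimal sketch of that relation (not code from the paper):

```python
def fret_efficiency(r, r0):
    """Förster resonance energy transfer efficiency for a donor-acceptor
    pair separated by distance r, given the Förster radius r0 (the
    distance at which transfer efficiency is 50%). r and r0 must be in
    the same units."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# The sixth-power falloff makes efficiency drop steeply around r = r0,
# which is what makes single-pair FRET a useful "spectroscopic ruler".
```

Fitting measured single-pair efficiencies against this curve, as done with the rigid DNA spacer constructs, tests how well the mean efficiencies follow Förster theory.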
Abstract:
Acknowledgements: This study received no specific funding. The study involved the analysis of data collected routinely as part of the national AAA screening programme in Scotland.
Abstract:
Enhancers are defined by their ability to stimulate gene activity from remote sites and their requirement for promoter-proximal upstream activators to activate transcription. Here we demonstrate that recruitment of the p300/CBP-associated factor PCAF to a reporter gene is sufficient to stimulate promoter activity. The PCAF-mediated stimulation of transcription from either a distant or promoter-proximal position depends on the presence of an upstream activator (Sp1). These data suggest that acetyltransferase activity may be a primary component of enhancer function, and that recruitment of polymerase and enhancement of transcription are separable. Transcriptional activation by PCAF requires both its acetyltransferase activity and an additional activity within its N terminus. We also show that the simian virus 40 enhancer and PCAF itself are sufficient to counteract Mad-mediated repression. These results are compatible with recent models in which gene activity is regulated by the competition between deacetylase-mediated repression and enhancer-mediated recruitment of acetyltransferases.