894 results for System Performance Measures.
Abstract:
This thesis consists of four essays on the design and disclosure of compensation contracts. Essays 1, 2 and 3 focus on behavioral aspects of mandatory compensation disclosure rules and of contract negotiations in agency relationships. The three experimental studies develop psychology-based theory and present results that deviate from standard economic predictions. Furthermore, the results of Essays 1 and 2 also have implications for firms' discretion in how to communicate their top management's incentives to the capital market. Essay 4 analyzes the role of fairness perceptions in the evaluation of executive compensation; for this purpose, two surveys targeting representative eligible voters as well as investment professionals were conducted.

Essay 1 investigates the role of the detailed 'Compensation Discussion and Analysis', which is part of the Securities and Exchange Commission's 2006 regulation, in investors' evaluations of executive performance. Compensation disclosure complying with this regulation clarifies the relationship between realized reported compensation, the underlying performance measures, and their target achievement levels. The experimental findings suggest that the salient presentation of executives' incentives inherent in the 'Compensation Discussion and Analysis' makes investors' performance evaluations less outcome dependent. Investors' judgment and investment decisions might therefore be less affected by noisy environmental factors that drive financial performance. The results also suggest that fairness perceptions of compensation contracts are essential for investors' performance evaluations, in that more transparent disclosure increases both the perceived fairness of compensation and the performance evaluation of managers who are not responsible for poor financial performance. These results have important practical implications, as firms might choose to communicate their top management's incentive compensation more transparently in order to benefit from less volatile expectations about their future performance.

Like the first experiment, the experiment described in Essay 2 addresses the question of more transparent compensation disclosure. Unlike the first experiment, however, it does not analyze the effect of a more salient presentation of contract information but the informational effect of the contract information itself. For this purpose, the experiment tests two conditions in which assessing the compensation contract's incentive compatibility, which determines executive effort, is either possible or impossible. On the one hand, the results suggest that the quality of investors' expectations about executive effort improves; on the other hand, investors might over-adjust their prior expectations about executive effort when confronted with unexpected financial performance and under-adjust when financial performance confirms their prior expectations. In the experiment, more transparent compensation disclosure therefore does not lead to more accurate overall judgments of executive effort and even lowers the quality with which outcome information is processed. These results add to the disclosure literature, which predominantly advocates more transparency; the findings, however, identify decreased information-processing quality as a relevant category of disclosure costs. Firms should therefore carefully evaluate the additional costs and benefits of more transparent compensation disclosure.
Together with the results from the experiment in Essay 1, the two experiments on compensation disclosure imply that firms should rather focus on their discretion in how to present their compensation disclosure, in order to benefit from investors' improved fairness perceptions and their spill-over effects on performance evaluation.

Essay 3 studies the behavioral effects of contextual factors in recruitment processes that, from a standard economic perspective, do not affect the employer's or the applicant's bargaining power. In particular, the experiment studies two common characteristics of recruitment processes: pre-contractual competition among job applicants, and applicants' non-binding effort announcements such as those made during job interviews. Despite the standard economic irrelevance of these factors, the essay develops theory regarding their behavioral effects on employees' subsequent effort provision and employers' contract design choices. The experimental findings largely support the predictions. More specifically, the results suggest that firms can benefit from increased effort and, therefore, may generate higher profits. Further, firms may seize a larger share of the employment relationship's profit by highlighting the competitive aspects of the recruitment process and by requiring applicants to make announcements about their future effort.

Finally, Essay 4 studies the role of fairness perceptions in the public evaluation of executive compensation. Although economic criteria for the design of incentive compensation generally do not make restrictive recommendations with regard to the amount of compensation, fairness perceptions might be relevant from the perspective of firms and standard setters, because behavioral theory has identified fairness as an important determinant of individuals' judgment and decisions. Yet although fairness concerns about executive compensation are often voiced in the popular media and even in the literature, evidence on the meaning of fairness in the context of executive compensation is scarce and ambiguous. In order to inform practitioners and standard setters whether fairness concerns are exclusive to non-professionals or relevant for investment professionals as well, the two surveys presented in Essay 4 aim to find commonalities in the opinions of representative eligible voters and investment professionals. The results suggest that fairness is an important criterion for both groups. In particular, exposure to risk in the form of the variable compensation share is a criterion shared by both groups: the higher the assumed variable share, the higher the compensation amount that is perceived as fair. However, to a large extent, opinions on executive compensation depend on personality characteristics, and to some extent investment professionals' perceptions deviate systematically from those of non-professionals. The findings imply that firms might benefit from emphasizing the riskiness of their managers' variable pay components; in this respect, the findings are also in line with those of Essay 1.
Abstract:
Several techniques have been proposed to exploit GNSS-derived kinematic orbit information for the determination of long-wavelength gravity field features. These methods include the (i) celestial mechanics approach, (ii) short-arc approach, (iii) point-wise acceleration approach, (iv) averaged acceleration approach, and (v) energy balance approach. Although there is a general consensus that, except for the energy balance approach, these methods theoretically provide equivalent results, real-data gravity field solutions from kinematic orbit analysis have never been evaluated against each other within a consistent data processing environment. This contribution strives to close this gap. Target consistency criteria for our study are the input data sets, period of investigation, spherical harmonic resolution, a priori gravity field information, etc. We compare GOCE gravity field estimates based on the aforementioned approaches as computed at the Graz University of Technology, the University of Bern, the University of Stuttgart/Austrian Academy of Sciences, and by RHEA Systems for the European Space Agency. The research groups involved complied with most of the consistency criteria; deviations occur only where technical infeasibility exists. Performance measures include formal errors, differences with respect to a state-of-the-art GRACE gravity field, (cumulative) geoid height differences, and SLR residuals from precise orbit determination of geodetic satellites. We found that for approaches (i) to (iv), the cumulative geoid height differences at spherical harmonic degree 100 differ by only ≈10%; in the absence of the polar data gap, SLR residuals agree to within ≈96%. From our investigations, we conclude that real-data analysis results are in agreement with the theoretical considerations concerning the (relative) performance of the different approaches.
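As an illustration of one of the performance measures mentioned above, the following is a minimal sketch of how degree-wise and cumulative geoid height differences can be computed from differences of fully normalized spherical harmonic coefficients; the coefficient arrays, the reference radius value, and the synthetic example data are placeholder assumptions, not values from the compared solutions.

```python
import numpy as np

R = 6378136.3  # reference Earth radius in metres (assumed value)

def geoid_height_differences(dC, dS, n_max=100):
    """Degree-wise and cumulative geoid height differences.

    dC, dS : 2-D arrays of differences of fully normalized spherical
             harmonic coefficients, indexed as [degree, order].
    Returns (per-degree differences, cumulative differences) in metres.
    """
    per_degree = np.zeros(n_max + 1)
    for n in range(2, n_max + 1):
        # Sum the squared coefficient differences over all orders m <= n.
        power_n = np.sum(dC[n, :n + 1] ** 2 + dS[n, :n + 1] ** 2)
        per_degree[n] = R * np.sqrt(power_n)
    # Cumulative geoid height difference up to each degree.
    cumulative = np.sqrt(np.cumsum(per_degree ** 2))
    return per_degree, cumulative

# Example with synthetic coefficient differences (illustration only).
rng = np.random.default_rng(0)
dC = 1e-11 * rng.standard_normal((101, 101))
dS = 1e-11 * rng.standard_normal((101, 101))
sigma_n, sigma_cum = geoid_height_differences(dC, dS)
print(f"cumulative geoid height difference at degree 100: {sigma_cum[100]:.4f} m")
```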
Abstract:
Objectives. The central objective of this study was to systematically examine the internal structure of multihospital systems, determining the management principles used and the performance levels achieved in medical care and administrative areas.

The Universe. The study universe consisted of short-term general American hospitals owned and operated by multihospital corporations. The corporations compared were investor-owned (for-profit) and voluntary multihospital systems. The individual hospital was the unit of analysis for the study.

Theoretical Considerations. Contingency theory, drawing on selected aspects of the classical and human relations schools of thought, seemed well suited to describing multihospital organization and was used in this research.

The Study Hypotheses. The main null hypotheses were that there are no significant differences between the voluntary and the investor-owned multihospital sectors in (1) their hospital structures and (2) their patient care and administrative performance levels.

The Sample. A stratified random sample of 212 hospitals owned by multihospital systems was selected to represent the two study sectors equally. Of the sampled hospitals approached, 90.1% responded.

The Analysis. Sixteen scales were constructed in conjunction with 16 structural variables developed from the major questions and sub-items of the questionnaire. This was followed by analysis of an additional 7 structural and 24 effectiveness (performance) measures, using frequency distributions. Finally, summary statistics and statistical tests for each variable and its sub-items were completed and recorded in 38 tables.

Study Findings. While it has been argued that there are great differences between the two sectors, this study found that, with a few exceptions, the null hypotheses of no difference in the organizational and operational characteristics of non-profit and for-profit hospitals were accepted. However, several significant differences were found in the structural variables: functional specialization and autonomy were significantly higher in the voluntary sector, while only centralization differed significantly in the investor-owned sector. Among the effectiveness measures, occupancy rate, cost of data processing, total man-hours worked, F.T.E. ratios, and personnel per occupied bed were significantly higher in the voluntary sector. The findings indicated that both voluntary and for-profit systems were converging toward a common hierarchical corporate management approach. Factors of size and management style may be better descriptors for characterizing a specific multihospital group than its profit or nonprofit status. (Abstract shortened with permission of author.)
Abstract:
The Data Quality Campaign (DQC) has been focused since 2005 on advocating for states to build robust state longitudinal data systems (SLDS). While states have made great progress in their data infrastructure, and should continue to emphasize this work, data systems alone will not improve outcomes. It is time for both the DQC and states to focus on building capacity to use the information that these systems are producing at every level, from classrooms to state houses. To impact system performance and student achievement, the ingrained culture must be replaced with one that focuses on data use for continuous improvement. The effective use of data to inform decisions, provide transparency, improve the measurement of outcomes, and fuel continuous improvement will not come to fruition unless there is a system-wide focus on building capacity around the collection, analysis, dissemination, and use of these data, including through research.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables.

Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature.

In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; it is often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The remaining two classes are unique to time series data elements. The first of these is the raw data elements: multiple values per variable, constituting the measured observations that are typically available to end users when they review time series data, often represented as dots on a graph. The final class of data results from performing time series analysis, and it represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that can distinguish between two or more classes of outcomes.
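To make the idea of a time-series-derived latent feature concrete, the sketch below computes a simple trend feature (the least-squares slope over a trailing window) for a vital-sign series; the window length, sampling interval, and the heart-rate example are illustrative assumptions rather than choices made in the manuscripts.

```python
import numpy as np

def trend_feature(values, timestamps):
    """Least-squares slope of a series over its window.

    values     : observations of one variable (e.g., heart rate).
    timestamps : observation times in minutes, same length as values.
    Returns the slope in units per minute, usable as a latent candidate feature.
    """
    t = np.asarray(timestamps, dtype=float)
    y = np.asarray(values, dtype=float)
    # Fit y = a*t + b and keep the slope a as the trend descriptor.
    a, _b = np.polyfit(t, y, deg=1)
    return a

# Illustrative 30-minute window of heart-rate observations (synthetic data).
times = np.arange(0, 30, 2)                  # one observation every 2 minutes
hr = 120 - 0.8 * times + np.random.default_rng(1).normal(0, 2, times.size)
print(f"heart-rate trend: {trend_feature(hr, times):.2f} beats/min per minute")
```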
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each step, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances.

The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to adding just the trend analysis features (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
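The preprocessing issues listed above (a common reference time, imputation and reduction to a predefined duration and resolution, and normalization by variable family) can be pictured with a minimal sketch like the one below; the 60-minute duration, 5-minute resolution, and the pandas-based resampling are illustrative assumptions, not the exact pipeline used in the study.

```python
import numpy as np
import pandas as pd

def preprocess_series(obs, reference_time, duration_min=60, resolution_min=5):
    """Align one variable's observations to a reference time and a fixed grid.

    obs            : pandas Series of observations indexed by timestamp.
    reference_time : event (or matched control) time; the series is truncated
                     to the `duration_min` minutes preceding it.
    Returns a fixed-length vector at `resolution_min` spacing, forward-filled.
    """
    start = reference_time - pd.Timedelta(minutes=duration_min)
    window = obs.sort_index().loc[start:reference_time]
    grid = pd.date_range(start, reference_time, freq=f"{resolution_min}min")
    # Reduce/impute: place observations on the grid, then forward-fill gaps.
    aligned = window.reindex(window.index.union(grid)).ffill().reindex(grid)
    return aligned.to_numpy()

def normalize_family(matrix):
    """Z-score an entire variable family (rows = patients, columns = time bins)
    with a single mean and standard deviation, rather than per column."""
    mu, sigma = np.nanmean(matrix), np.nanstd(matrix)
    return (matrix - mu) / sigma
```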
Abstract:
The expected changes in rainfall over the next decades may cause significant changes in the hydroperiod of temporary wetlands and, consequently, shifts in plant community distributions. Predicting plant community responses to changes in the hydroperiod is a key issue for the conservation and management of temporary wetlands. We present a predictive distribution model for Arthrocnemum macrostachyum communities in the Doñana wetland (Southern Spain). Logistic regression was used to fit the model, with the number of days of inundation and the mean water height as predictors. The internal validation of the model yielded good performance measures. The model was applied to a set of expected scenarios of change in the hydroperiod to anticipate the most likely shifts in the distribution of Arthrocnemum macrostachyum communities.
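A minimal sketch of the kind of model described, a logistic regression of community presence on inundation duration and mean water height, is shown below; the column names and the synthetic data frame are placeholders, and the original study's fitting and validation details are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data frame: one row per sampling point (synthetic values).
df = pd.DataFrame({
    "days_inundated": np.random.default_rng(2).integers(0, 200, 300),
    "mean_water_height_cm": np.random.default_rng(3).uniform(0, 60, 300),
})
# Synthetic presence/absence of Arthrocnemum macrostachyum communities.
logit_true = 2.0 - 0.03 * df["days_inundated"] - 0.02 * df["mean_water_height_cm"]
df["presence"] = (np.random.default_rng(4).uniform(size=300)
                  < 1 / (1 + np.exp(-logit_true))).astype(int)

# Fit the logistic regression with the two hydroperiod predictors.
X = sm.add_constant(df[["days_inundated", "mean_water_height_cm"]])
model = sm.Logit(df["presence"], X).fit(disp=0)
print(model.summary())

# Predicted occurrence probability under a hypothetical hydroperiod scenario.
scenario = pd.DataFrame({"const": [1.0], "days_inundated": [120],
                         "mean_water_height_cm": [35.0]})
print("predicted occurrence probability:", model.predict(scenario)[0])
```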
Abstract:
• Objectives of HALA!
• Main Activities
• HALA! Management Team
• Participants
• Intended Audience
• Heritage in ATM and Automation
• The new paradigm shift in Automation in ATM
• Overall system performance as the main driver for ATM Automation
• The three interdependent dimensions of the paradigm change
• New role assignments based on:
  • "best time"
  • "decision place"
  • "best player"
• HALA! main research areas
Abstract:
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
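As an illustration of the supervisor/worker split described above (a Central Unit overseeing per-camera Processing Units), here is a minimal Python sketch based on multiprocessing; the class names, the queue-based result hand-off, and the dummy acquisition and detection steps are assumptions for illustration, not the paper's actual implementation.

```python
import multiprocessing as mp

def processing_unit(camera_id, result_queue, stop_event):
    """One Processing Unit: acquires frames from its camera and processes them."""
    frame_index = 0
    while not stop_event.is_set():
        frame = f"camera{camera_id}-frame{frame_index}"   # acquisition phase (stub)
        detections = len(frame) % 3                        # processing phase (dummy detector)
        result_queue.put((camera_id, frame_index, detections))
        frame_index += 1
        if frame_index >= 5:                               # bounded run for the example
            break

class CentralUnit:
    """Supervisor that starts, monitors and collects results from the PUs."""
    def __init__(self, n_cameras):
        self.results = mp.Queue()
        self.stop_event = mp.Event()     # would end open-ended acquisition loops
        self.workers = [
            mp.Process(target=processing_unit, args=(cam, self.results, self.stop_event))
            for cam in range(n_cameras)
        ]

    def run(self):
        for w in self.workers:
            w.start()
        for w in self.workers:
            w.join()
        while not self.results.empty():
            print("result:", self.results.get())

if __name__ == "__main__":
    CentralUnit(n_cameras=2).run()
```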
Abstract:
In October 2002, under the auspices of Spanish Cooperation, a pilot electrification project put into operation two centralised PV-diesel hybrid systems in two different Moroccan villages. These systems currently provide a full-time energy service and supply electricity to more than a hundred families, six community buildings, street lighting and one running-water system. The electricity service looks very similar to an urban one: a single-phase AC supply (230 V/50 Hz) distributed to each dwelling through a low-voltage mini-grid, which has been designed to be fully compatible with a future arrival of the utility grid. The management of this electricity service is based on a "fee-for-service" scheme agreed between a local NGO, partner of the project, and the electricity associations created in each village, which are in charge of, among other tasks, recording the daily energy production of the systems and the monthly energy consumption of each house. This data register allows a systematic evaluation of both the system performance and the energy consumption of users. Now, after four years of operation, this paper presents the experience of this pilot electrification project and draws lessons that can be useful for designing, managing and sizing this type of small village PV-hybrid system.
Abstract:
Scheduling problems are very important in today's world; they appear in practically every area of modern industry, hence the importance of solving them optimally so that the resources associated with the problem can be saved. The proper scheduling of jobs in manufacturing processes is an important problem that arises in production in many companies. The order in which jobs are processed is not irrelevant: it determines parameters of interest whose values should be optimized as far as possible, such as the total cost of executing the jobs, the time needed to complete them, or the stock of work in progress that will be generated. This leads directly to the problem of determining the most suitable order in which to carry out the jobs so as to optimize some of the above parameters or similar ones. Because of the limitations of conventional optimization techniques, this thesis presents a metaheuristic based on a Simple Genetic Algorithm (SGA) to solve general-flow (Job Shop Scheduling, JSS) and regular-flow (Flow Shop Scheduling, FSS) scheduling problems that arise in a machining workshop, with the objective of optimizing several performance measures of a work plan. The main contribution of this thesis is a mathematical model for measuring the energy consumption of the machines involved in the execution of a work plan, to be used as an optimization criterion. In addition, a method is proposed to improve the performance of the Simple Genetic Algorithm in searching for solutions, based on exploiting idle time.
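The abstract does not give the algorithm's details, so the following is only a minimal, generic sketch of a simple genetic algorithm with a permutation encoding applied to a flow shop makespan objective; the toy processing-time matrix, the operator choices and the parameters are illustrative assumptions, and the thesis's energy-consumption model is not reproduced here.

```python
import random

# Toy processing times: rows = jobs, columns = machines (minutes).
P = [
    [4, 3, 6],
    [2, 5, 4],
    [6, 2, 3],
    [3, 4, 5],
    [5, 6, 2],
]

def makespan(sequence):
    """Completion time of the last job on the last machine for a permutation."""
    n_machines = len(P[0])
    finish = [0] * n_machines
    for job in sequence:
        for m in range(n_machines):
            ready = finish[m - 1] if m > 0 else 0
            finish[m] = max(finish[m], ready) + P[job][m]
    return finish[-1]

def order_crossover(a, b):
    """OX crossover: copy a slice from parent a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j + 1] = a[i:j + 1]
    fill = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(seq, rate=0.2):
    """Swap mutation: exchange two positions with the given probability."""
    if random.random() < rate:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

def simple_ga(pop_size=30, generations=200):
    jobs = list(range(len(P)))
    population = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection, crossover and mutation produce the next generation.
        def tournament():
            return min(random.sample(population, 3), key=makespan)
        population = [mutate(order_crossover(tournament(), tournament()))
                      for _ in range(pop_size)]
    best = min(population, key=makespan)
    return best, makespan(best)

if __name__ == "__main__":
    random.seed(0)
    sequence, cmax = simple_ga()
    print("best sequence:", sequence, "makespan:", cmax)
```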
Abstract:
Slot antenna arrays have been well known since the 1940s, originally intended for the radar systems of large warships and ground stations, where size and weight were not highly restrictive. Over the years, mainly owing to significant advances in materials and manufacturing methods, the range of applications of this type of radiating system has grown considerably: new biomedical technologies, collision-avoidance systems in cars and aircraft navigation, short-range communication links with high bit rates, and even satellite-borne systems for television broadcasting. Within this family of antennas, two groups stand out as the most widely used: parallel-plate antennas with slots arranged in a circular or spiral distribution, and clusters of waveguide-fed linear arrays. Continuing the research carried out in recent years at the Tokyo Institute of Technology and in the Radiation Group at the Universidad Politécnica de Madrid, this thesis focuses on the latter group, although, as will be seen, it departs considerably from conventional design techniques and methodologies.

Arrays of slots that are straight and parallel to the axis of the feeding rectangular waveguide are without doubt the most widely used models because of their reliability at high frequencies, their ability to handle large amounts of power, and the simplicity of their design and manufacture. However, they also have disadvantages, such as a narrow return-loss bandwidth and rapid degradation of the radiation pattern with frequency. These are due to the resonant nature of the radiating elements: away from resonance, the overall system becomes mismatched and its performance degrades. In two-dimensional arrays of straight slots, the electric field is polarized in the plane transverse to the slots, which corresponds to the plane of high side-lobe levels.

This thesis aims to develop a systematic method for designing arrays of inclined slots displaced from the centerline (hereinafter "compound slots"), identified in 1971 as one of the challenges to overcome in antenna design. The technique employed is based on the Method of Moments, circuit theory, and the theory of scattering-matrix connection. Since it is a circuit-based method, the first part of the thesis studies the applicability of the fundamental equivalent networks, their ability to reproduce the physical phenomena of the slot, and the limitations and advantages they present for characterizing the different compound-slot configurations. The differences between the T and π networks are examined in depth, and the selection of one or the other is made to depend on the type of radiating element.

Once the type of network to be used in the system design has been selected, a progressive cascading algorithm, the Forward Matching Procedure, has been developed to connect the equivalent networks from the feeding port to the short circuit that terminates the model. This algorithm is independent of the number of elements, the central operating frequency, the slot inclination angle, and the chosen equivalent network (T or π). It is based on defining the array design as a Constraint Satisfaction Problem, which is solved by a backtracking algorithm. As a result, it returns an equivalent circuit of the complete array that is matched at its input port and whose elements consume power according to a given amplitude distribution for the array.

In any antenna array, mutual coupling between elements through the radiated field is one of the main problems facing the engineer, and its effects are detrimental to the overall performance of the system, both in matching and in radiation. The use of an equivalent circuit for array design had been discarded by some authors because of the difficulty of characterizing these coupling effects and including them in the design stage. In this thesis, the coupling has also been modeled as an equivalent network whose elements are ideal transformers and admittances, connected to the set of equivalent networks that represents the array. Comparing the estimated results, in terms of return loss and radiation, with those obtained from popular commercial software such as CST Microwave Studio confirms the validity of the method proposed here, the first systematic design method for compound-slot arrays fed by rectangular waveguide. Since the slots are non-resonant, the return-loss bandwidth is much wider than that of straight-slot arrays. For two-dimensional arrays, the inclination angle can be adjusted so that the field is polarized in the plane of low side-lobe levels. In addition to the full-wave simulations, two prototypes of six and ten elements, centred at 12 GHz, have been designed, built and measured. The return-loss and radiation-pattern measurements show excellent results, confirming that the genuine Method of Moments - Forward Matching Procedure technique developed throughout this thesis is valid and trustworthy.
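The cascading of per-slot equivalent networks from the shorted end toward the feed can be pictured with a generic transmission-line calculation like the one below; this is only a sketch of standard ABCD-matrix cascading with placeholder element values, not the thesis's Forward Matching Procedure or its Method-of-Moments-derived network parameters.

```python
import numpy as np

def series_element(Z):
    """ABCD matrix of a series impedance (e.g., one branch of a slot's T network)."""
    return np.array([[1, Z], [0, 1]], dtype=complex)

def shunt_element(Y):
    """ABCD matrix of a shunt admittance."""
    return np.array([[1, 0], [Y, 1]], dtype=complex)

def line_section(beta_l, Z0=1.0):
    """ABCD matrix of a lossless line/waveguide section of electrical length beta_l."""
    return np.array([[np.cos(beta_l), 1j * Z0 * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / Z0, np.cos(beta_l)]], dtype=complex)

def input_reflection(networks, Z0=1.0):
    """Cascade two-ports from the feed toward a short circuit and return S11."""
    total = np.eye(2, dtype=complex)
    for abcd in networks:
        total = total @ abcd
    A, B, C, D = total.ravel()
    # Input impedance of the cascade terminated in a short circuit (Z_load = 0).
    Zin = B / D
    return (Zin - Z0) / (Zin + Z0)

# Placeholder cascade: three shunt "slots" separated by half-guide-wavelength sections.
cascade = []
for _ in range(3):
    cascade.append(shunt_element(0.3 + 0.05j))   # illustrative normalized slot admittance
    cascade.append(line_section(np.pi))          # spacing of half a guide wavelength
print("|S11| at the feed:", abs(input_reflection(cascade)))
```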
Abstract:
In recent decades there has been increasing interest in systems comprised of several autonomous mobile robots and, as a result, substantial development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature by researchers from the scientific community focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to accomplish with a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way.

This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and individually, select a particular task so that all tasks are optimally distributed. In general, to distribute the multi-tasks among a team of robots, the robots have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds; each robot performs this estimate locally depending on the load, that is, the number of pending tasks to be performed. It is also of interest to evaluate the results of each approach, comparing the results obtained when noise is introduced in the number of pending loads, in order to simulate the robot's error in estimating the real number of pending tasks.

The main contribution of this thesis is the approach based on self-organization and the division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches, and the generation of dynamic tasks are presented and discussed. The particular issues studied are:

Threshold models: the experiments conducted to test the response threshold model (see the sketch below), with the objective of analyzing the system performance index for the problem of distributing heterogeneous multi-tasks in multi-robot systems; additive noise was introduced in the number of pending loads and dynamic tasks were generated over time.

Learning automata methods: the experiments conducted to test the learning automata-based probabilistic algorithms. The approach was tested to evaluate the system performance index with additive noise and with dynamic task generation, for the same problem of distributing heterogeneous multi-tasks in multi-robot systems.

Ant colony optimization: the experiments conducted to test the ant colony optimization-based deterministic algorithms for distributing heterogeneous multi-tasks in multi-robot systems. In these experiments, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
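A minimal sketch of the classic response threshold rule referenced above, where the probability of engaging a task grows with its stimulus and falls with the robot's threshold, is given below; the parameter values, the noise on the perceived load, and the threshold update rule are illustrative assumptions rather than the exact formulation used in the thesis.

```python
import random

def engage_probability(stimulus, threshold, n=2):
    """Response threshold rule: P = s^n / (s^n + theta^n)."""
    return stimulus ** n / (stimulus ** n + threshold ** n)

def select_task(pending_loads, thresholds, noise_std=0.5):
    """One robot picks a task type based on noisy estimates of the pending loads."""
    best_task, best_p = None, 0.0
    for task, load in pending_loads.items():
        perceived = max(0.0, load + random.gauss(0.0, noise_std))  # noisy load estimate
        p = engage_probability(perceived, thresholds[task])
        if random.random() < p and p > best_p:
            best_task, best_p = task, p
    return best_task

def update_thresholds(thresholds, chosen, learn=0.1, forget=0.05):
    """Adaptive thresholds: performing a task lowers its threshold, idleness raises it."""
    for task in thresholds:
        if task == chosen:
            thresholds[task] = max(0.1, thresholds[task] - learn)
        else:
            thresholds[task] += forget
    return thresholds

# Tiny simulation: one robot choosing between two heterogeneous task types.
random.seed(0)
loads = {"transport": 6.0, "cleaning": 2.0}
thresholds = {"transport": 3.0, "cleaning": 3.0}
for step in range(10):
    chosen = select_task(loads, thresholds)
    if chosen is not None:
        loads[chosen] = max(0.0, loads[chosen] - 1.0)   # one unit of work completed
        thresholds = update_thresholds(thresholds, chosen)
    print(step, chosen, {k: round(v, 2) for k, v in loads.items()})
```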
Abstract:
Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But large scale distributed systems complexity could only be a matter of perspective: it could be possible to understand the Grid or cloud behavior as a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that probably would not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
This paper presents a W-band high-resolution radar sensor for short-range applications. Low-cost technologies have been properly selected in order to implement a versatile and easily scalable radar system. A large operational bandwidth of 9 GHz, required to obtain high range resolution, is attained by means of a frequency-multiplication-based architecture. The system characterization to identify the performance-limiting stages and the subsequent design optimization are presented. The assessment of system performance for several representative applications has been carried out.
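As a quick check of what a 9 GHz bandwidth implies, the theoretical range resolution of a radar is set by ΔR = c / (2B); the short computation below uses only this standard relation, and the numerical result is not quoted from the paper.

```python
# Theoretical range resolution for the stated 9 GHz bandwidth: deltaR = c / (2B).
c = 299_792_458.0      # speed of light in m/s
B = 9e9                # operational bandwidth in Hz
delta_r = c / (2 * B)
print(f"range resolution ≈ {delta_r * 100:.2f} cm")   # ≈ 1.67 cm
```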
Abstract:
This project aims to characterize and optimize a professional sound system. "Characterize" is understood as determining the particular attributes of each component integrated in the system; "optimize" is understood as finding the best way to obtain a flat, distortion-free response over the whole frequency range, in the largest possible area. The sound system under study belongs to a live band, so it is set up and configured at each concert depending on the characteristics of the venue, whether indoor or outdoor. Apart from these particularities, the whole system is divided into two clusters, L and R (left and right sides of the stage), each comprising a digital signal processor, four amplification stages, an eight-unit line array system, and a set of eight subwoofers.

To accomplish the stated objective, the project has been divided into the steps described below. First, measurements were made in the anechoic chamber of the EUITT to obtain the characteristics of each of the elements of the system; these measurements were stored in ASCII format. Next, a graphical interface was designed that, using the stored measurements, characterizes both the individual response of each element of the sound system chain and the combined response of a line array unit and a subwoofer unit. The interface is interactive and can automatically deliver the configuration settings that optimize the ensemble, that is, achieve alignment in the frequency range shared by the two units. The measurements made in the anechoic chamber were also used to model the complete line array system and to perform free-field simulations using acoustic prediction software. Simulations were run with the configuration settings that align the individual elements (provided by the graphical interface) in order to check their validity for the full line array and subwoofer clusters. Finally, the system optimization methods proposed by renowned professionals in the field were analyzed in order to apply them at a real event. During the setup and assembly of the event, the configuration settings provided by the interface were applied, and their validity was verified by making on-site measurements according to the criteria set out in the optimization methods studied.
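The "alignment in the shared frequency range" mentioned above is essentially a choice of delay (and gain or polarity) that makes the line array and subwoofer responses sum constructively around the crossover; the sketch below scans candidate delays for the best summed magnitude, using two placeholder complex frequency responses in place of the anechoic-chamber measurements.

```python
import numpy as np

def best_alignment_delay(freqs, H_array, H_sub, band=(60.0, 120.0), max_delay_ms=20.0):
    """Scan delays applied to the subwoofer response and keep the one that
    maximizes the mean summed magnitude in the shared (crossover) band."""
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    delays = np.linspace(0.0, max_delay_ms / 1000.0, 2001)
    scores = []
    for tau in delays:
        H_sub_delayed = H_sub * np.exp(-2j * np.pi * freqs * tau)
        summed = np.abs(H_array + H_sub_delayed)
        scores.append(summed[in_band].mean())
    best = int(np.argmax(scores))
    return delays[best] * 1000.0, scores[best]

# Placeholder responses standing in for the measured transfer functions.
freqs = np.linspace(20.0, 500.0, 2000)
H_array = np.exp(-2j * np.pi * freqs * 0.004)        # line array unit (4 ms latency)
H_sub = 0.9 * np.exp(-2j * np.pi * freqs * 0.001)    # subwoofer unit (1 ms latency)
delay_ms, score = best_alignment_delay(freqs, H_array, H_sub)
print(f"suggested subwoofer delay: {delay_ms:.2f} ms (mean in-band sum {score:.2f})")
```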