869 results for Problem solving, control methods, and search
Abstract:
AIM To systematically assess the efficacy of patient-administered mechanical and/or chemical plaque control protocols in the management of peri-implant mucositis (PM). MATERIAL AND METHODS Randomized (RCTs) and controlled clinical trials (CCTs) were identified through an electronic search of three databases complemented by a manual search. Identification, screening, eligibility assessment and inclusion of studies were performed independently by two reviewers. Studies without professional intervention or with only professionally administered mechanical debridement were included. Quality assessment was performed by means of the Cochrane Collaboration's tool for assessing risk of bias. RESULTS Eleven RCTs with a follow-up from 3 to 24 months were included. The definition of PM was lacking or heterogeneously reported. Complete resolution of PM in all patients was not achieved in any study; one study reported complete resolution of PM in 38% of patients. Surrogate end-point outcomes of PM therapy were often reported. The choice of control interventions showed great variability. The efficacy of powered toothbrushes, a triclosan-containing toothpaste and adjunctive antiseptics remains to be established. High quality of methods and reporting was found in four studies. CONCLUSIONS Professionally and patient-administered mechanical plaque control alone should be considered the standard of care in the management of PM. Therapy of PM is a prerequisite for the prevention of peri-implantitis.
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the network's coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
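The suboptimal-estimate-plus-local-refinement idea can be illustrated with a short numerical sketch. Below is a minimal, centralized Gauss-Newton refinement for RSSI localization under an assumed log-distance path-loss model (the reference power `P0` and exponent `alpha` are illustrative choices, not values from the thesis); in the distributed variant described above, the sums that form the normal equations would instead be obtained through average consensus among the nodes.

```python
import numpy as np

# Minimal, centralized sketch of a Gauss-Newton refinement for RSSI-based
# localization under a log-distance path-loss model:
#   rssi_i = P0 - 10*alpha*log10(||x - a_i||) + noise
# In the distributed variant, the sums forming J^T J and J^T r would be
# computed by average consensus among the nodes.

def gauss_newton_rssi(anchors, rssi, x0, P0=-40.0, alpha=2.0, iters=10):
    """Refine a target position estimate from RSSI readings at known anchors."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                                   # (N, 2)
        d = np.linalg.norm(diff, axis=1)                     # anchor-target distances
        pred = P0 - 10.0 * alpha * np.log10(d)               # model-predicted RSSI
        r = rssi - pred                                       # residuals
        J = -(10.0 * alpha / np.log(10.0)) * diff / (d**2)[:, None]
        step, *_ = np.linalg.lstsq(J, r, rcond=None)          # solve J step ~ r
        x += step
    return x

# Toy usage: four anchors at the corners of a 10 m square, noiseless readings.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
rssi = -40.0 - 20.0 * np.log10(np.linalg.norm(anchors - true_pos, axis=1))
print(gauss_newton_rssi(anchors, rssi, x0=[5.0, 5.0]))        # converges to ~[3, 7]
```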
Abstract:
Many macroscopic properties (hardness, corrosion, catalytic activity, etc.) are directly related to the surface structure, that is, to the position and chemical identity of the outermost atoms of the material. Current experimental techniques for its determination produce a signature from which the structure must be inferred by solving an inverse problem: a solution is proposed, its corresponding signature computed and then compared to the experiment. This is a challenging optimization problem in which the search space and the number of local minima grow exponentially with the number of atoms, hence its solution cannot be achieved for arbitrarily large structures. Nowadays, it is solved using a mixture of human knowledge and local search techniques: an expert proposes a solution that is refined using a local minimizer. If the outcome does not fit the experiment, a new solution must be proposed. Solving a small surface can take from days to weeks of this trial-and-error method. Here we describe our ongoing work on its solution. We use a hybrid algorithm that mixes evolutionary techniques with trust-region methods and reuses knowledge gained during the execution to avoid repeated searches of the same structures. Its parallelization produces good results even without gathering the full population, hence it can be used in loosely coupled environments such as grids. With this algorithm, test cases that previously took weeks of expert time can be solved automatically in a day or two of uniprocessor time.
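A minimal sketch of the hybrid strategy described above: an evolutionary outer loop whose candidates are polished by a trust-region local minimizer, with a cache that avoids re-refining structures already visited. The objective `misfit` is a cheap stand-in for the expensive signature comparison, and all names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def misfit(coords):                         # hypothetical surrogate for the signature misfit
    return np.sum(np.sin(coords) ** 2 + 0.1 * coords ** 2)

def hybrid_search(dim=6, pop_size=12, generations=15, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-3.0, 3.0, size=(pop_size, dim))
    cache = {}                              # memoized refinements: avoid repeated local searches

    def refine(x):
        key = tuple(np.round(x, 2))
        if key not in cache:
            res = minimize(misfit, x, method="trust-constr")  # trust-region local polish
            cache[key] = (res.x, res.fun)
        return cache[key]

    for _ in range(generations):
        refined = [refine(ind) for ind in pop]
        refined.sort(key=lambda t: t[1])                      # rank by refined misfit
        elite = np.array([x for x, _ in refined[: pop_size // 2]])
        # variation operator: Gaussian perturbation of the elite individuals
        children = elite + rng.normal(scale=0.3, size=elite.shape)
        pop = np.vstack([elite, children])

    best_x, best_f = min((refine(ind) for ind in pop), key=lambda t: t[1])
    return best_x, best_f

print(hybrid_search())
```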
Abstract:
As the energy of particle or heavy-ion accelerators such as CERN or GSI, of fusion reactors such as JET or ITER, and of other scientific experiments increases, it is becoming ever more necessary to use remote handling techniques to interact with the remote, radioactive environment. So far, the dose rate at CERN could reach values near several mSv for cooling times in the range of hours, which allowed human intervention for maintenance tasks. At JET, values close to 200 Sv were measured after a cooling time of 4 months, and since then remote handling techniques have become usual. There is a clear tendency for radiation levels to increase in the future. A clear example is ITER, where values of 450 Sv/h are expected in the centre of the torus after 11 days of cooling; the new energy levels of CERN are also expected to demand more advanced remote handling means. This thesis is framed in these circumstances: it studies a bilateral control system based on force-position, avoiding the use of force/torque sensors, whose electronic content makes them especially sensitive to these environments. The work focuses on the teleoperation of industrial robots, which, due to their well-known reliability, ease of adaptation to these environments, low cost and high availability, are an interesting alternative to expensive custom-made solutions for remote handling tasks. Firstly, the kinematic problem of teleoperating a master and slave with dissimilar kinematics is analysed and a new general approach for solving it is presented. The solution includes the use of assistive forces to guide the human operator. Next, the experiments carried out with an ABB robot are explained in detail, showing the difficulties encountered and the recommendations for overcoming them. The kinematic study is concluded with a method for matching the workspaces of a dissimilar master and slave. The research then turns to the dynamics, studying robot modelling with the purpose of obtaining a method for estimating the external forces acting on the robot. During the characterisation of the dynamic model's parameters, a set of tests is performed to find a compromise between computational complexity and estimation error. Key points for modelling and characterising robots with a parallelogram structure are also given, and the desired control architecture is presented.
Once a complete model of the slave is obtained, different alternatives for external force estimation are reviewed in order to predict forces in real time, minimizing position differentiation so as to minimize the estimation noise. The research starts with classic state observers and evolves towards a Luenberger-sliding observer whose implementation is relatively simple and whose results are convincing. The use of the proposed observer is also analysed in a simulated bilateral control, in which the force feedback obtained with classic techniques based on position error is compared against a force-position architecture where the force is estimated rather than measured. It is shown that the proposed solution gives results comparable to the classical architectures while introducing an alternative for the teleoperation of industrial robots that would otherwise be impossible in radioactive environments. Finally, the problems arising from the practical application of teleoperation in the scenarios mentioned above are analysed. Due to conditions that are prohibitive for all electronic equipment, the control systems must be placed far from the manipulators, so the power cables feeding the slave devices can be hundreds of metres long. In these circumstances, overvoltages appear in PWM-based drives that can be destructive for the system formed by drive, wiring and actuator, and therefore have to be eliminated. In this work a solution based on a commercial LC filter is proposed, and it is extensively verified that its inclusion does not introduce adverse effects into the actuator's control.
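As an illustration of the observer-based force estimation described above, here is a minimal single-joint sketch: the external torque is modelled as a slowly varying extra state and reconstructed by a plain Luenberger observer from position measurements alone. The model, gains and numbers are placeholders for illustration, not the thesis's Luenberger-sliding design.

```python
import numpy as np

J_inertia, b_visc, dt = 1.2, 0.5, 1e-3        # inertia, viscous friction, time step

# State x = [position, velocity, external torque]; torque modelled as ~constant.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, -b_visc / J_inertia, 1.0 / J_inertia],
              [0.0, 0.0, 0.0]])
B = np.array([0.0, 1.0 / J_inertia, 0.0])
C = np.array([1.0, 0.0, 0.0])                  # only position is measured
L = np.array([60.0, 900.0, 4000.0])            # observer gains (hand-tuned example)

def observer_step(x_hat, u, y_meas):
    """One Euler step of  x_hat' = A x_hat + B u + L (y - C x_hat)."""
    dx = A @ x_hat + B * u + L * (y_meas - C @ x_hat)
    return x_hat + dt * dx

# Simulate a plant carrying a constant 2 Nm external torque and run the observer.
x_true = np.zeros(3); x_true[2] = 2.0
x_hat = np.zeros(3)
for _ in range(20000):
    u = 0.3                                    # constant motor torque command
    dx_true = A @ x_true + B * u               # the same model drives the "plant"
    x_true[:2] += dt * dx_true[:2]             # external torque stays constant
    x_hat = observer_step(x_hat, u, x_true[0])
print("estimated external torque:", x_hat[2])  # approaches 2.0
```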
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations and often constraints are built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, their orientation or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large for even very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
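The ambiguity result can be illustrated with a toy counting experiment, independent of the paper's first-order-logic formulation: enumerate every occupancy assignment of a tiny binary grid and count how many reproduce the same pair of orthogonal silhouettes. The grid, "images" and scale are invented for illustration only.

```python
from itertools import product

import numpy as np

def silhouettes(grid):
    """The two orthogonal binary silhouettes (projections) of a 2D occupancy grid."""
    return tuple(grid.any(axis=0)), tuple(grid.any(axis=1))

target = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [0, 0, 1]], dtype=bool)
obs = silhouettes(target)

solutions = 0
for bits in product([0, 1], repeat=9):          # all 512 occupancy assignments of a 3x3 grid
    grid = np.array(bits, dtype=bool).reshape(3, 3)
    if silhouettes(grid) == obs:                # consistent with both observed silhouettes
        solutions += 1
print("grids consistent with both silhouettes:", solutions)   # prints 265
```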
Abstract:
Recent research has highlighted several job characteristics salient to employee well-being and behavior for which there are no adequate generally applicable measures. These include timing and method control, monitoring and problem-solving demand, and production responsibility. In this article, an attempt to develop measures of these constructs provided encouraging results. Confirmatory factor analyses applied to data from 2 samples of shop-floor employees showed a consistent fit to a common 5-factor measurement model. Scales corresponding to each of the dimensions showed satisfactory internal and test-retest reliabilities. As expected, the scales also discriminated between employees in different jobs and employees working with contrasting technologies.
Abstract:
This paper explains how to solve a fully connected N-city travelling salesman problem (TSP) using a genetic algorithm. A crossover operator to be used in the simulation of a genetic algorithm (GA) with DNA is presented. The aim of the paper is to follow the path of creating a new computational model based on DNA molecules and genetic operations. The paper addresses the problem of exponential-size algorithms in DNA computing by using biological methods and techniques. After individual encoding and fitness evaluation, a protocol for the next step in a GA, crossover, is needed. The paper also shows how to make the GA faster via different populations of possible solutions.
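For context, a conventional in-silico crossover for permutation-encoded TSP tours looks like the sketch below: an order crossover (OX) that preserves a slice of one parent and fills the remaining cities in the other parent's order, always yielding a valid tour. This is standard GA machinery, not the DNA-based protocol the paper proposes.

```python
import random

def order_crossover(parent1, parent2, rng=random):
    """OX: copy a slice from parent1, then fill the remaining cities in parent2's order."""
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]                 # inherited slice from parent 1
    fill = [c for c in parent2 if c not in child]     # remaining cities, in parent 2's order
    k = 0
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child

p1 = [0, 1, 2, 3, 4, 5, 6, 7]
p2 = [3, 7, 5, 1, 6, 0, 2, 4]
print(order_crossover(p1, p2))    # a valid tour visiting every city exactly once
```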
Abstract:
Defining 'effectiveness' in the context of community mental health teams (CMHTs) has become increasingly difficult under the current pattern of provision required in National Health Service mental health services in England. The aim of this study was to establish the characteristics of multi-professional team working effectiveness in adult CMHTs to develop a new measure of CMHT effectiveness. The study was conducted between May and November 2010 and comprised two stages. Stage 1 used a formative evaluative approach based on the Productivity Measurement and Enhancement System to develop the scale with multiple stakeholder groups over a series of qualitative workshops held in various locations across England. Stage 2 analysed responses from a cross-sectional survey of 1500 members in 135 CMHTs from 11 Mental Health Trusts in England to determine the scale's psychometric properties. Based on an analysis of its structural validity and reliability, the resultant 20-item scale demonstrated good psychometric properties and captured one overall latent factor of CMHT effectiveness comprising seven dimensions: improved service user well-being, creative problem-solving, continuous care, inter-team working, respect between professionals, engagement with carers and therapeutic relationships with service users. The scale will be of significant value to CMHTs and healthcare commissioners both nationally and internationally for monitoring, evaluating and improving team functioning in practice.
Abstract:
Systems analysis (SA) is widely used in solving complex and vague problems. The initial stages of SA are the analysis of problems and purposes to obtain problems/purposes of smaller complexity and vagueness, which are combined into hierarchical structures of problems (SP) and purposes (PS). Managers have to be sure that the PS, and the purpose-realizing system (PRS) that can achieve the PS purposes, are adequate to the problem to be solved. However, the SP/PS are usually not substantiated well enough, because their development is based on collective expertise in which natural-language logic and expert estimation methods are used. That is why the scientific foundations of SA cannot be considered completely formed. The structure-and-purpose approach to SA, based on a logic-and-linguistic simulation of problems/purposes analysis, is a step towards formalizing the initial stages of SA to improve the adequacy of their results, and towards increasing the quality of SA as a whole. Managers of industrial organizing systems using the approach eliminate logical errors in the SP/PS at early stages of planning and so are able to find better solutions to complex and vague problems.
Abstract:
The long-term goal of the work described is to contribute to the emerging literature of prevention science in general, and to school-based psychoeducational interventions in particular. The psychoeducational intervention reported in this study used a main-effects prevention intervention model. The current study focused on promoting optimal cognitive and affective functioning. The goal of this intervention was to increase potential protective factors such as critical cognitive and communicative competencies (e.g., critical problem solving and decision making) and affective competencies (e.g., personal control and responsibility) in middle adolescents who have been identified by the school system as being at risk for problem behaviors. The current psychoeducational intervention draws on an ongoing program of theory and research (Berman, Berman, Cass Lorente, Ferrer Wreder, Arrufat, & Kurtines, 1996; Ferrer Wreder, 1996; Kurtines, Berman, Ittel, & Williamson, 1995) and extends it to include Freire's (1970) concept of transformative pedagogy in developing school-based psychoeducational programs that target troubled adolescents. The results of the quantitative and qualitative analyses indicated trends that were generally encouraging with respect to the effects of the intervention on increasing critical cognitive and affective competencies.
Abstract:
The majority of research work carried out in the field of Operations Research uses methods and algorithms to optimize the pick-up and delivery problem. Most studies aim to solve the vehicle routing problem, to accommodate optimal delivery orders, vehicles, etc. This paper focuses on a green logistics approach, in which the existing public transport infrastructure of a city is used for the delivery of small and medium-sized packaged goods, thus helping to ease urban congestion and reduce greenhouse gas emissions. A study was carried out to investigate the feasibility of the proposed multi-agent-based simulation model in terms of cost, time and energy efficiency. A multimodal Dijkstra shortest-path algorithm and Nested Monte Carlo Search are employed in a two-phase algorithmic approach used to generate a time-based cost matrix. The quality of the tour depends on the efficiency of the search algorithm implemented for plan generation and route planning. The results reveal a definite advantage of using public transportation over existing delivery approaches in terms of energy efficiency.
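A minimal sketch of the kind of shortest-path search that would feed such a time-based cost matrix: Dijkstra's algorithm over a small multimodal graph whose edge weights are travel times in minutes. The stops, modes and times below are invented for illustration.

```python
import heapq

graph = {
    "depot":    [("stop_A", 4, "walk"), ("stop_B", 7, "walk")],
    "stop_A":   [("hub", 12, "bus"), ("stop_B", 3, "walk")],
    "stop_B":   [("hub", 9, "tram")],
    "hub":      [("customer", 5, "walk")],
    "customer": [],
}

def dijkstra(graph, source, target):
    dist, prev = {source: 0}, {}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue                                   # stale queue entry
        for nxt, t, mode in graph[node]:
            nd = d + t
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = (node, mode)
                heapq.heappush(heap, (nd, nxt))
    # reconstruct the route as (from-node, mode-of-that-leg) pairs
    path, node = [], target
    while node != source:
        node, mode = prev[node]
        path.append((node, mode))
    return dist[target], list(reversed(path))

print(dijkstra(graph, "depot", "customer"))
# (21, [('depot', 'walk'), ('stop_A', 'bus'), ('hub', 'walk')])
```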
Abstract:
The horticultural sector has become an increasingly important sector of food production, for which greenhouse climate control plays a vital role in improving its sustainability. One of the methods to control the greenhouse climate is Model Predictive Control, which can be optimized through a branch and bound (B&B) algorithm. The application of the algorithm in the literature is examined and analyzed through small examples, and later extended to greenhouse climate simulation. A comparison is made of various alternative objective functions available in the literature. Subsequently, a modified version of the B&B algorithm is presented, which reduces the number of node evaluations required for optimization. Finally, three alternative algorithms are developed and compared in order to move the optimization problem from a discrete to a continuous control space.
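A toy sketch of branch and bound over a discrete control space, in the spirit described above: one of three heater settings is chosen at each step of a short horizon, and any partial control sequence whose accumulated cost already exceeds the best complete sequence is pruned. The scalar model, costs and horizon are placeholders, not the thesis's greenhouse model.

```python
import math

U = (0.0, 1.0, 2.0)          # admissible control levels (e.g. heater settings)
N = 6                        # prediction horizon
SETPOINT = 20.0

def step(temp, u):
    return temp + 0.5 * u - 0.1 * (temp - 10.0)      # simple heating + ambient cooling

def stage_cost(temp, u):
    return (temp - SETPOINT) ** 2 + 0.2 * u          # tracking error + energy use

def branch_and_bound(temp0):
    best_cost, best_seq, nodes = math.inf, None, 0

    def recurse(temp, k, cost, seq):
        nonlocal best_cost, best_seq, nodes
        nodes += 1
        if cost >= best_cost:        # bound: stage costs are nonnegative, so prune
            return
        if k == N:
            best_cost, best_seq = cost, seq
            return
        for u in U:                  # branch over the discrete control set
            nxt = step(temp, u)
            recurse(nxt, k + 1, cost + stage_cost(nxt, u), seq + (u,))

    recurse(temp0, 0, 0.0, ())
    return best_seq, best_cost, nodes

seq, cost, nodes = branch_and_bound(temp0=15.0)
total = sum(len(U) ** k for k in range(N + 1))
print(seq, round(cost, 2), f"{nodes} of {total} tree nodes visited")
```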
Abstract:
The idea of spacecraft formations, flying in tight configurations with maximum baselines of a few hundred meters in low-Earth orbits, has generated widespread interest over the last several years. Nevertheless, controlling the movement of spacecraft in formation poses difficulties, such as high in-orbit computing demands and collision avoidance requirements, which escalate as the number of units in the formation increases, as complicated nonlinear effects are imposed on the dynamics, and as uncertainty arises from a lack of knowledge of system parameters. These requirements have led to the need for reliable linear and nonlinear controllers in terms of relative and absolute dynamics. The objective of this thesis is, therefore, to introduce new control methods that allow spacecraft in formation, with circular/elliptical reference orbits, to efficiently execute safe autonomous manoeuvres. These controllers are distinguished from the bulk of the literature in that they merge guidance laws never before applied to spacecraft formation flying with collision avoidance capabilities in a single control strategy. For this purpose, three control schemes are presented: linear optimal regulation, linear optimal estimation and adaptive nonlinear control. In general terms, the proposed control approaches command the dynamical behaviour of one or several followers with respect to a leader so as to asymptotically track a time-varying nominal trajectory (TVNT), while the threat of collision between the followers is reduced by repelling accelerations obtained from a collision avoidance scheme (CAS) during the periods of closest proximity. Linear optimal regulation is achieved through a Riccati-based tracking controller. Within this control strategy, the controller provides guidance and tracking toward a desired TVNT, optimizing fuel consumption through a Riccati procedure with a finite cost function defined in terms of the desired TVNT, while the repelling accelerations generated by the CAS ensure evasive actions between the elements of the formation. The relative dynamics model, suitable for circular and eccentric low-Earth reference orbits, is based on the Tschauner-Hempel equations and includes a control input and a nonlinear term corresponding to the CAS repelling accelerations. Linear optimal estimation is built on the forward-in-time separation principle. This controller encompasses two stages: regulation and estimation. The first stage requires the design of a full state feedback controller using the state vector reconstructed by means of the estimator. The second stage requires the design of an additional dynamical system, the estimator, to obtain the states which cannot be measured, in order to approximately reconstruct the full state vector. The separation principle states that an observer built for a known input can also be used to estimate the state of the system and to generate the control input; this allows the observer and the feedback to be designed independently, exploiting the advantages of linear quadratic regulator theory, in order to estimate the states of a dynamical system with model and sensor uncertainty. The relative dynamics is described with the linear system used in the previous controller, with a control input and with nonlinearities entering via the repelling accelerations from the CAS during collision avoidance events. Moreover, sensor uncertainty is added to the control process by considering carrier-phase differential GPS (CDGPS) velocity measurement error.
An adaptive control law capable of delivering superior closed-loop performance compared to certainty-equivalence (CE) adaptive controllers is finally presented. A novel non-certainty-equivalence controller based on the Immersion and Invariance paradigm for close-manoeuvring spacecraft formation flying in both circular and elliptical low-Earth reference orbits is introduced. The proposed control scheme achieves stabilization by immersing the plant dynamics into a target dynamical system (or manifold) that captures the desired dynamical behaviour. The key feature of this methodology is the addition of a new term to the classical certainty-equivalence control approach that, in conjunction with the parameter update law, is designed to achieve adaptive stabilization. This parameter has the ultimate task of shaping the manifold into which the adaptive system is immersed. The performance of the controller is proven stable via a Lyapunov-based analysis and Barbalat's lemma. In order to evaluate the design of the controllers, test cases based on the physical and orbital features of the Prototype Research Instruments and Space Mission Technology Advancement (PRISMA) mission are implemented, extending the number of elements in the formation into scenarios with reconfigurations and on-orbit position switching in elliptical low-Earth reference orbits. An extensive analysis and comparison of the performance of the controllers in terms of total Δv and fuel consumption, with and without the effects of the CAS, is presented. These results show that the three proposed controllers allow the followers to asymptotically track the desired nominal trajectory and, additionally, the simulations including the CAS show an effective decrease of collision risk during the manoeuvre.
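A minimal sketch of the repelling-acceleration idea used by a collision avoidance scheme of this kind: an acceleration along the line of sight between two followers that grows as their separation drops below a safety radius and is simply added to the nominal tracking command. The inverse-distance potential, safety radius and gain are illustrative choices, not the thesis's actual CAS formulation.

```python
import numpy as np

def repelling_acceleration(r_i, r_j, safety_radius=50.0, gain=1e-4):
    """Acceleration on follower i pushing it away from follower j (positions in metres)."""
    delta = r_i - r_j
    dist = np.linalg.norm(delta)
    if dist >= safety_radius or dist == 0.0:
        return np.zeros(3)                       # outside the danger zone: no action
    # grows smoothly from zero at the safety radius to large values near contact
    magnitude = gain * (1.0 / dist - 1.0 / safety_radius)
    return magnitude * delta / dist

# Example: two followers 20 m apart along the x-axis.
r1 = np.array([0.0, 0.0, 0.0])
r2 = np.array([20.0, 0.0, 0.0])
a_rep = repelling_acceleration(r1, r2)
print(a_rep)   # small acceleration pushing follower 1 in the -x direction

# In controllers of this kind the term is added to the nominal tracking command:
#   u_total = u_tracking + a_rep
```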
Abstract:
The world of Computational Biology and Bioinformatics presently integrates many different areas of expertise, including computer science and electronic engineering. A major aim in Data Science is the development and tuning of specific computational approaches to interpret the complexity of Biology. Molecular biologists and medical doctors heavily rely on interdisciplinary experts capable of understanding the biological background and applying algorithms to find optimal solutions to their problems. With this problem-solving orientation, I was involved in two basic research fields: Cancer Genomics and Enzyme Proteomics. What I developed and implemented can therefore be considered a general effort to support data analysis both in Cancer Genomics and in Enzyme Proteomics, focusing on enzymes, which catalyse all the biochemical reactions in cells. Specifically, in Cancer Genomics I contributed to the characterization of the intratumoral immune microenvironment in gastrointestinal stromal tumours (GISTs), correlating immune cell population levels with tumour subtypes. I was involved in setting up strategies for the evaluation and standardization of different approaches to fusion transcript detection in sarcomas that can be applied in routine diagnostics; this was part of a coordinated effort of the Sarcoma working group of "Alleanza Contro il Cancro". In Enzyme Proteomics, I generated a derived database collecting all the human proteins and enzymes known to be associated with genetic disease. I curated the data search in freely available databases such as PDB, UniProt, Humsavar and ClinVar, and I was responsible for searching, updating and handling the information content and for computing statistics. I also developed a web server, BENZ, which allows researchers to annotate an enzyme sequence with the corresponding Enzyme Commission number, the key feature fully describing the catalysed reaction. In addition, I contributed substantially to the characterization of the enzyme-genetic disease association, towards a better classification of metabolic genetic diseases.