953 results for Power Line Detection
Abstract:
BACKGROUND Long-term hormone therapy has been the standard of care for advanced prostate cancer since the 1940s. STAMPEDE is a randomised controlled trial using a multiarm, multistage platform design. It recruits men with high-risk, locally advanced, metastatic or recurrent prostate cancer who are starting first-line long-term hormone therapy. We report primary survival results for three research comparisons testing the addition of zoledronic acid, docetaxel, or their combination to standard of care versus standard of care alone. METHODS Standard of care was hormone therapy for at least 2 years; radiotherapy was encouraged for men with N0M0 disease to November, 2011, then mandated; radiotherapy was optional for men with node-positive non-metastatic (N+M0) disease. Stratified randomisation (via minimisation) allocated men 2:1:1:1 to standard of care only (SOC-only; control), standard of care plus zoledronic acid (SOC + ZA), standard of care plus docetaxel (SOC + Doc), or standard of care with both zoledronic acid and docetaxel (SOC + ZA + Doc). Zoledronic acid (4 mg) was given for six 3-weekly cycles, then 4-weekly until 2 years, and docetaxel (75 mg/m(2)) for six 3-weekly cycles with prednisolone 10 mg daily. There was no blinding to treatment allocation. The primary outcome measure was overall survival. Pairwise comparisons of research versus control had 90% power at 2·5% one-sided α for hazard ratio (HR) 0·75, requiring roughly 400 control arm deaths. Statistical analyses were undertaken with standard log-rank-type methods for time-to-event data, with hazard ratios (HRs) and 95% CIs derived from adjusted Cox models. This trial is registered at ClinicalTrials.gov (NCT00268476) and ControlledTrials.com (ISRCTN78818544). FINDINGS 2962 men were randomly assigned to four groups between Oct 5, 2005, and March 31, 2013. Median age was 65 years (IQR 60-71). 1817 (61%) men had M+ disease, 448 (15%) had N+/X M0, and 697 (24%) had N0M0. 165 (6%) men were previously treated with local therapy, and median prostate-specific antigen was 65 ng/mL (IQR 23-184). Median follow-up was 43 months (IQR 30-60). There were 415 deaths in the control group (347 [84%] prostate cancer). Median overall survival was 71 months (IQR 32 to not reached) for SOC-only, not reached (32 to not reached) for SOC + ZA (HR 0·94, 95% CI 0·79-1·11; p=0·450), 81 months (41 to not reached) for SOC + Doc (0·78, 0·66-0·93; p=0·006), and 76 months (39 to not reached) for SOC + ZA + Doc (0·82, 0·69-0·97; p=0·022). There was no evidence of heterogeneity in treatment effect (for any of the treatments) across prespecified subsets. Grade 3-5 adverse events were reported for 399 (32%) patients receiving SOC, 197 (32%) receiving SOC + ZA, 288 (52%) receiving SOC + Doc, and 269 (52%) receiving SOC + ZA + Doc. INTERPRETATION Zoledronic acid showed no evidence of survival improvement and should not be part of standard of care for this population. Docetaxel chemotherapy, given at the time of long-term hormone therapy initiation, showed evidence of improved survival accompanied by an increase in adverse events. Docetaxel treatment should become part of standard of care for adequately fit men commencing long-term hormone therapy. FUNDING Cancer Research UK, Medical Research Council, Novartis, Sanofi-Aventis, Pfizer, Janssen, Astellas, NIHR Clinical Research Network, Swiss Group for Clinical Cancer Research.
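As a reading aid, here is a minimal sketch of how hazard ratios and 95% CIs like those above are typically derived from a Cox model, using the open-source lifelines library; the data frame, column names, and values below are hypothetical, not trial data.

```python
# Hedged sketch: hazard ratio with 95% CI from a Cox model, in the spirit
# of the trial's time-to-event analysis. All data here are invented.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months": [71, 81, 43, 60, 30, 76, 55, 90],   # follow-up time
    "died":   [1, 0, 1, 1, 1, 0, 1, 0],           # event indicator
    "soc_plus_doc": [0, 1, 0, 1, 0, 1, 0, 1],     # 1 = research arm
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
# The exp(coef) column of the summary is the hazard ratio for the
# research arm versus control, with its 95% confidence interval.
cph.print_summary()
```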
Abstract:
BACKGROUND Chemotherapy plus bevacizumab is a standard option for first-line treatment in metastatic colorectal cancer (mCRC) patients. We assessed whether no continuation is non-inferior to continuation of bevacizumab after completing first-line chemotherapy. PATIENTS AND METHODS In an open-label, phase III multicentre trial, patients with mCRC without disease progression after 4-6 months of standard first-line chemotherapy plus bevacizumab were randomly assigned to continuing bevacizumab at a standard dose or no treatment. CT scans were done every 6 weeks until disease progression. The primary end point was time to progression (TTP). A non-inferiority limit for the hazard ratio (HR) of 0.727 was chosen to detect a difference in TTP of 6 weeks or less, with a one-sided significance level of 10% and a statistical power of 85%. RESULTS The intention-to-treat population comprised 262 patients; median follow-up was 36.7 months. The median TTP was 4.1 [95% confidence interval (CI) 3.1-5.4] months for bevacizumab continuation versus 2.9 (95% CI 2.8-3.8) months for no continuation; HR 0.74 (95% CI 0.58-0.96). Non-inferiority could not be demonstrated. The median overall survival was 25.4 months for bevacizumab continuation versus 23.8 months (HR 0.83; 95% CI 0.63-1.1; P = 0.2) for no continuation. Severe adverse events were uncommon in the bevacizumab continuation arm. Costs for bevacizumab continuation were estimated to be ∼30,000 USD per patient. CONCLUSIONS Non-inferiority could not be demonstrated for treatment holidays versus continuing bevacizumab monotherapy after 4-6 months of standard first-line chemotherapy plus bevacizumab. Given the lack of impact on overall survival and the increased treatment costs, bevacizumab as a single agent is of no meaningful therapeutic value. More efficient treatment approaches are needed to maintain control of stabilized disease following induction therapy. CLINICAL TRIAL REGISTRATION ClinicalTrials.gov, number NCT00544700.
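The non-inferiority logic can be restated in a few lines. The sketch below encodes only the decision rule with the reported numbers; it is our reading of the margin, not the trial's statistical code.

```python
# Decision rule: with HR < 1 favouring continuation, "no continuation" is
# non-inferior only if the CI for the TTP hazard ratio stays above the
# prespecified margin of 0.727 (about a 6-week TTP difference).
hr, ci_low, ci_high = 0.74, 0.58, 0.96   # reported TTP result
margin = 0.727

non_inferior = ci_low >= margin          # lower bound must clear the margin
print(f"HR {hr} (95% CI {ci_low}-{ci_high}); "
      f"non-inferiority demonstrated: {non_inferior}")   # -> False
```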
Abstract:
The article develops an understanding of power within strategy formation processes as established by verbal and bodily communication. On this note, we examined concepts of power constituted by hierarchy and developed a conceptual framework for a performative interpretation of power. In line with Austin's (1962) and Butler's (1990, 1993, 1997) concept of performativity, as well as strategy-as-practice research (Balogun et al., 2007; Jarzabkowski & Spee, 2009), we ask: How is persuasion achieved by strategic actors during strategy formation processes? To explore verbal and bodily communication empirically, we developed an experimental setting in a small high-tech company located in Germany in December 2012. The results indicate that (1) during critical incidents, when perspectives clash, actors use arguments to achieve persuasion; and (2) independently of their hierarchical position within the company, strategic actors show an equal distribution of argumentative techniques.
Abstract:
High-energy e⁻ and π⁻ were measured by the multichannel plate (MCP) detector at the PiM1 beam line of the High Intensity Proton Accelerator Facilities located at the Paul Scherrer Institute, Villigen, Switzerland. The measurements provide the absolute detection efficiencies for these particles: 5.8% ± 0.5% for electrons in the beam momentum range 17.5-300 MeV/c and 6.0% ± 1.3% for pions in the beam momentum range 172-345 MeV/c. The pulse height distribution determined from the measurements is close to an exponential function with a negative exponent, indicating that the particles penetrated the MCP material before producing the signal somewhere inside the channel. The low charge extraction and nominal gains of the MCP detector observed in this study are consistent with the proposed mechanism of signal formation by penetrating radiation. A very similar MCP ion detector will be used in the Neutral Ion Mass (NIM) spectrometer designed for the JUICE mission of the European Space Agency (ESA) to the Jupiter system, to perform measurements of the chemical composition of the Galilean moon exospheres. The detection efficiency for penetrating radiation determined in the present studies is important for the optimisation of the radiation shielding of the NIM detector against the high-rate and high-energy electrons trapped in Jupiter's magnetic field. Furthermore, the current studies indicate that MCP detectors can be useful for measuring high-energy particle beams at high temporal resolution.
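To illustrate the reported exponential pulse-height behaviour, the short sketch below fits a decaying exponential to an invented pulse-height histogram; a distribution of the form N(h) ∝ exp(−h/h₀) becomes a straight line in log space.

```python
# Hedged sketch: recovering the decay constant of an (invented) exponential
# pulse-height distribution, as described for the MCP signals above.
import numpy as np

pulse_height = np.array([10, 20, 30, 40, 50, 60], dtype=float)  # arb. units
counts = np.array([5000.0, 2400.0, 1150.0, 560.0, 270.0, 130.0])

# log N(h) = log N0 - h/h0, so a linear fit of log(counts) vs height
# gives slope -1/h0.
slope, intercept = np.polyfit(pulse_height, np.log(counts), 1)
print(f"decay constant h0 ≈ {-1.0 / slope:.1f} (pulse-height units)")
```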
Abstract:
On the orbiter of the Rosetta spacecraft, the Cometary Secondary Ion Mass Analyser (COSIMA) will provide new in situ insights into the chemical composition of cometary grains throughout 67P/Churyumov–Gerasimenko's (67P/CG) journey, nominally until the end of December 2015. The aim of this paper is to present the pre-calibration that has already been performed, as well as the different methods that have been developed to facilitate the interpretation of the COSIMA mass spectra, especially of their organic content. The first step was to establish a mass spectra library, in positive and negative ion mode, of targeted molecules and to determine the specific features of each compound and chemical family analyzed. As the exact nature of the refractory cometary organic matter is currently unknown, this library is obviously not exhaustive. Therefore, this library has also been the starting point for the search for indicators that highlight the presence of compounds containing a specific atom or structure. These indicators correspond to the intensity ratios of specific peaks in the mass spectrum. They have allowed us to identify samples containing nitrogen atoms, aliphatic chains, or polyaromatic hydrocarbons. From these indicators, a preliminary calibration line, from which the N/C ratio could be derived, has also been established. Searching for specific mass differences can also help to identify peaks related to quasi-molecular ions in an unknown mass spectrum. The Bayesian Positive Source Separation (BPSS) technique will also be very helpful for data analysis. This work is the starting point for the analysis of the cometary refractory organic matter. Nevertheless, calibration work will continue in order to reach the best possible interpretation of the COSIMA observations.
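To make the indicator idea concrete, the sketch below computes a peak-intensity ratio from a toy positive-ion spectrum. The chosen peaks (NH4+ against C2H3+) are our illustrative assumption, not the calibrated COSIMA indicators.

```python
# Hedged sketch of a peak-intensity-ratio indicator on a toy mass spectrum
# given as {m/z: intensity}; peak assignments are illustrative only.
TOY_SPECTRUM = {18.034: 450.0, 27.023: 800.0, 30.034: 120.0, 91.054: 60.0}

def indicator(spectrum, mz_a, mz_b, tol=0.01):
    """Intensity ratio of two peaks, matched within an m/z tolerance."""
    def intensity(target):
        return sum(i for mz, i in spectrum.items() if abs(mz - target) <= tol)
    denom = intensity(mz_b)
    return intensity(mz_a) / denom if denom else float("inf")

# NH4+ (m/z 18.034) over C2H3+ (m/z 27.023) as a crude nitrogen indicator.
print(f"N indicator ≈ {indicator(TOY_SPECTRUM, 18.034, 27.023):.2f}")
```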
Abstract:
A particle accelerator is any device that, using electromagnetic fields, imparts energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them up to the level required for its purpose. The applications of particle accelerators are countless, ranging from the common TV CRT, through medical X-ray devices, to the large ion colliders used to probe the smallest details of matter. Other engineering applications include ion implantation devices for obtaining better semiconductors and materials with remarkable properties. Materials that must withstand irradiation in future nuclear fusion plants also benefit from particle accelerators. A particle accelerator requires many devices for its correct operation. The most important are the particle sources; the guiding, focusing and correcting magnets; the radiofrequency accelerating cavities; the fast deflection devices; the beam diagnostic mechanisms; and the particle detectors. Most fast particle deflection devices have historically been built using copper coils and ferrite cores, which could produce a relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, giving response times in the microsecond range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanosecond range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of a particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses; the diversion of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This thesis deals with a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. The following concepts, among others, are analyzed: scattering parameters, resonating higher-order modes, wakefields, etc. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of electrodes are also detailed, with some interesting contributions on these concepts. The electromagnetic and vacuum tests, required to ensure that the manufactured devices fulfil the specifications, are then analyzed. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs).
Solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with good flat-tops.
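The deflection mechanism the thesis describes can be summarised in two hedged equations (the notation is ours, not the thesis's): the transverse momentum change is the integrated Lorentz force over the time the particle spends between the electrodes, and the deflection angle follows in the small-angle limit.

```latex
% Illustrative notation for a strip-line kicker (not taken from the thesis):
% a particle of charge q and longitudinal momentum p_z crosses electrodes of
% length L at speed \beta c and receives a transverse kick.
\begin{align}
  \Delta p_{\perp} &= q \int_{0}^{L/\beta c}
      \left[ \vec{E}(z,t) + \vec{v} \times \vec{B}(z,t) \right]_{\perp} dt,\\[4pt]
  \theta &\approx \frac{\Delta p_{\perp}}{p_z}.
\end{align}
% For an ultra-relativistic beam and a counter-propagating TEM pulse, the
% electric and magnetic contributions add, which is why the two short,
% opposite-phase pulses mentioned above are fed against the beam direction.
```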
Abstract:
Oxygen 1s excitation and ionization processes in the CO2 molecule have been studied with dispersed and non-dispersed fluorescence spectroscopy as well as with the vacuum ultraviolet (VUV) photon–photoion coincidence technique. The intensity of the neutral O emission line at 845 nm shows particular sensitivity to core-to-Rydberg excitations and core–valence double excitations, while shape resonances are suppressed. In contrast, the partial fluorescence yield in the wavelength window 300–650 nm and the excitation functions of selected O+ and C+ emission lines in the wavelength range 400–500 nm display all of the absorption features. The relative intensity of ionic emission in the visible range increases towards higher photon energies, which is attributed to O 1s shake-off photoionization. VUV photon–photoion coincidence spectra reveal major contributions from the C+ and O+ ions and a minor contribution from C2+. No conclusive changes in the intensity ratios among the different ions are observed above the O 1s threshold. The line shape of the VUV–O+ coincidence peak in the mass spectrum carries some information on the initial core excitation.
Abstract:
In this paper we present an adaptive multi-camera system for real-time object detection that can efficiently adjust the computational requirements of the video processing blocks to the available processing power and the activity of the scene. The system is based on a two-level adaptation strategy that works at the local and the global level. Object detection is based on a Gaussian mixture model background subtraction algorithm. Results show that the system can efficiently adapt the algorithm parameters without a significant loss in detection accuracy.
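A minimal sketch of the detection core, using OpenCV's Gaussian-mixture (MOG2) background subtractor: the adaptation knobs shown (history length, frame downscaling) merely stand in for the paper's two-level local/global strategy, whose details are not reproduced here.

```python
# Hedged sketch: Gaussian mixture background subtraction with two crude
# "adaptation" knobs. Not the paper's algorithm, just the common building
# block it is based on.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(
    history=300,         # shorter history adapts faster and uses less memory
    varThreshold=16,     # per-pixel squared Mahalanobis distance threshold
    detectShadows=False, # disabling shadow detection saves CPU
)

cap = cv2.VideoCapture(0)  # hypothetical camera index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # A global-level adaptation could downscale frames under high CPU load.
    small = cv2.resize(frame, None, fx=0.5, fy=0.5)
    mask = subtractor.apply(small)       # foreground (moving objects) mask
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```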
Abstract:
The increasing interest in wireless sensor networks can be readily understood simply by thinking about what they essentially are: a large number of small, self-powered sensing nodes that gather information or detect special events and communicate wirelessly, with the end goal of handing their processed data to a base station. The sensor nodes are densely deployed inside the phenomenon of interest, can be deployed at random, and have cooperative capabilities. Usually these devices are small and inexpensive, so that they can be produced and deployed in large numbers, and so their resources in terms of energy, memory, computational speed and bandwidth are severely constrained. Sensing, processing and communication are three key elements whose combination in one tiny device gives rise to a vast number of applications. Sensor networks provide endless opportunities but at the same time pose formidable challenges, such as the fact that energy is a scarce and usually non-renewable resource. However, recent advances in low-power Very Large Scale Integration, embedded computing, communication hardware and, in general, the convergence of computing and communications are making this emerging technology a reality. Likewise, advances in nanotechnology and Micro Electro-Mechanical Systems are pushing toward networks of tiny distributed sensors and actuators. There are different kinds of sensors, such as pressure sensors, accelerometers, cameras, thermal sensors and microphones. They monitor conditions at different locations, such as temperature, humidity, vehicular movement, lighting conditions, pressure, soil makeup, noise levels, the presence or absence of certain kinds of objects, mechanical stress levels on attached objects, and momentary characteristics such as the speed, direction and size of an object. The state of Wireless Sensor Networks will be reviewed and the most famous protocols examined. As Radio Frequency Identification (RFID) is becoming extremely present and important nowadays, it will be examined as well. RFID has a crucial role to play going forward, for businesses and individuals alike. The impact of wireless identification is exerting strong pressure on RFID technology and services, research and development, standards development, security compliance and privacy, and much more. Its economic value has been proven in some countries, while others are just at the planning or pilot stages, but wider usage has yet to take hold through the modernisation of business models and applications. Possible applications of sensor networks are of interest to the most diverse fields. Environmental monitoring, warfare, child education, surveillance, micro-surgery and agriculture are only a few examples.
Some real hardware applications in the United States of America will be examined, as it is probably the country that has investigated most in this area. Universities like Berkeley, UCLA (University of California, Los Angeles) and Harvard, and enterprises such as Intel, are leading those investigations. But the USA is not the only place where wireless sensor networks are used and investigated. The University of Southampton, for example, is developing technology to monitor glacier behaviour using sensor networks, contributing to fundamental research in both glaciology and wireless sensor networks. Coalesenses GmbH (Germany) and ETH Zurich are also working on applying wireless sensor networks in many different areas. A Spanish solution will be the one examined most thoroughly, for being innovative, adaptable and multipurpose. This study of the sensor has focused mainly on traffic applications, but the compilation of more than 50 different applications published by this specific sensor's firm should not be forgotten. Currently there are many vehicle surveillance technologies, including loop sensors, video cameras, image sensors, infrared sensors, microwave radar, GPS, etc. Their performance is acceptable but not sufficient because of their limited coverage and the expensive costs of implementation and, especially, maintenance. They have defects such as line-of-sight requirements, low accuracy, strong dependence on environment and weather, inability to operate continuously day and night, and high installation and maintenance costs. Consequently, in actual traffic applications the received data are insufficient or poor in real-time terms owing to the limited number and cost of detectors. With the increase of vehicles in urban road networks, vehicle detection technologies are confronted with new requirements. Wireless sensor networks are a state-of-the-art technology and a revolution in remote information sensing and collection applications, with broad prospects of application in intelligent transportation systems. An application for target tracking and counting using a network of binary sensors has been developed. This allows the appliance to spend much less energy when transmitting information and makes the devices more independent, in order to achieve better traffic control. The application focuses on the efficacy of collaborative tracking rather than on the communication protocols used by the sensor nodes. Holiday traffic is a good case in which it is necessary to keep count of the cars on the roads. To this end, a Matlab simulation has been produced for target tracking and counting using a network of binary sensors that could, for example, be implemented in Libelium's solution; Libelium is the enterprise that developed the sensor examined in depth here. The promising results obtained indicate that binary proximity sensors can form the basis of a robust architecture for wide-area surveillance and tracking. When the target paths are smooth enough, the ClusterTrack particle filter algorithm gives excellent performance in terms of identifying and tracking different target trajectories. This algorithm could, of course, be used for other applications, and this line of work could be pursued in future research. It is not surprising that binary proximity sensor networks have attracted a lot of attention lately.
Despite the minimal information a binary proximity sensor provides, networks of this sensing modality can track many different classes of targets accurately enough.
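As a flavour of what tracking with binary proximity sensors involves, here is a deliberately simplified Python sketch (the original simulation was written in Matlab and used the ClusterTrack particle filter; this one only estimates a single static target as the centroid of the sensors that fire).

```python
# Hedged sketch: localizing one target from binary proximity readings.
# Every value below (field size, radius, positions) is invented.
import numpy as np

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 100, size=(50, 2))    # 50 nodes in a 100x100 field
sensing_radius = 15.0
target = np.array([42.0, 57.0])                # true position (unknown to net)

# Each binary sensor reports only "target in range" (1) or not (0).
fired = np.linalg.norm(sensors - target, axis=1) <= sensing_radius

if fired.any():
    estimate = sensors[fired].mean(axis=0)     # centroid of firing sensors
    error = np.linalg.norm(estimate - target)
    print(f"{fired.sum()} sensors fired; estimate {estimate}, error {error:.2f}")
else:
    print("no sensor fired; target unobserved")
```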
Abstract:
Software Product Line Engineering (SPLE) has proved to have significant advantages in family-based software development, but it also implies the upfront design of a product-line architecture (PLA) from which individual product applications can be engineered. The big upfront design associated with PLAs is in conflict with the current need to be open to change. The turbulence of the current business climate makes change inevitable in order to stay competitive, and requires PLAs to be open to change even late in development. The trend of being open to change is manifested in the Agile Software Development (ASD) paradigm, and it is spreading to the domain of SPLE. To reduce the big upfront design of PLAs as currently practiced in SPLE, new paradigms are being created, one being Agile Product Line Engineering (APLE). APLE aims to make the development of product lines more flexible and adaptable to changes, as promoted in ASD. To put APLE into practice it is necessary to make mechanisms available that assist and guide the agile construction and evolution of PLAs while complying with the "be open to change" agile principle. This thesis defines a process for the agile construction and evolution of product-line architectures, which we refer to as Agile Product-Line Architecting (APLA). The APLA process provides agile architects with a set of models for describing, documenting and tracing PLAs, as well as an algorithm to analyze change impact. Both the models and the change impact analysis offer the following capabilities: flexibility and adaptability when defining software architectures, enabling change during the incremental and iterative design of PLAs (anticipated or planned changes) and their evolution (unanticipated or unforeseen changes); assistance in checking architectural integrity through change impact analysis in terms of architectural concerns, such as dependencies on earlier design decisions, rationale, constraints, and risks; and guidance in the change decision-making process through change impact analysis in terms of architectural components and connections. APLA therefore provides the mechanisms required to construct and evolve PLAs that can easily be refined iteration after iteration during the APLE development process. These mechanisms are provided in a modeling framework called FPLA (Flexible Product-Line Architecture). The contributions of this thesis have been validated through a project on a metering management system in electrical power networks. This case study took place in an i-smart software factory, in collaboration with the Technical University of Madrid and Indra Software Labs.
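As an illustration of what a change impact analysis over a PLA can look like, the sketch below treats the architecture as a dependency graph and collects everything reachable from a changed component. It is our simplification; FPLA's models and algorithm are considerably richer.

```python
# Hedged sketch: change impact as reachability in a component graph.
# Component names and edges are hypothetical.
from collections import deque

# component -> components that depend on it
DEPENDENTS = {
    "MeterReader":    ["BillingService", "AlarmService"],
    "BillingService": ["ReportingUI"],
    "AlarmService":   ["ReportingUI"],
    "ReportingUI":    [],
}

def impact(changed: str) -> set:
    """Breadth-first collection of all components affected by a change."""
    seen, queue = set(), deque([changed])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(impact("MeterReader"))  # {'BillingService', 'AlarmService', 'ReportingUI'}
```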
Abstract:
Mealiness is a textural attribute related to an internal fruit disorder that involves quality loss. It is characterised by the combination of abnormal softness of the fruit and absence of free juiciness in the mouth when eaten by the consumer. Recent research concluded with the development of a precise instrumental procedure to measure a scale of mealiness based on the combination of several rheological properties and empirical magnitudes. In this line, time-domain laser reflectance spectroscopy (TDRS) is a medical technology, new to agrofood research, which is capable of obtaining physical and chemical information independently and simultaneously; this can be of interest for characterising mealiness. Using VIS and NIR lasers as light sources, TDRS was applied in this work to Golden Delicious and Cox apples (n=90), forming several batches of untreated samples and samples storage-treated (20°C & 95% RH) to promote the development of mealiness. The collected database was clustered into different groups according to their instrumental test values (Barreiro et al, 1998). The optical coefficients were used as explanatory variables when building discriminant analysis functions for mealiness, achieving a classification score above 80% of correctly identified mealy versus fresh apples.
Abstract:
An asymmetric stripline is proposed in this paper. The main aim of this line is to distribute power among the subarrays of an array with minimum losses. Several vertical transitions to the subarrays are shown, along with some network designs at X band for a square array for satellite communications.
Abstract:
In order to achieve total selectivity in electrical distribution networks, it is of great importance to analyze the fault currents in ungrounded power systems. This information helps to grant selectivity in electrical distribution networks by ensuring that only the faulty line or feeder is removed from service. In the present work a new selective, directional protection method for ungrounded power systems is evaluated. The new method measures only fault currents to detect earth faults and works with a directional criterion to determine the line under faulty conditions. The main contribution of this new technique is that it can detect earth faults on outgoing lines at any type of substation, avoiding the possible mismatch of traditional directional earth-fault relays. The detection technique is based on comparing the direction of a reference current with the direction of all the earth-fault capacitive currents on all the feeders connected to the same bus bars. The new method has been validated through computer simulations. The results for the different cases studied are remarkable, proving the validity and usefulness of the new method.
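The directional comparison can be pictured with zero-sequence currents as complex phasors: on healthy feeders of an ungrounded system the capacitive earth-fault current keeps one orientation relative to the reference, while on the faulty feeder it is reversed. The sketch below encodes only that simplified decision rule, with invented phasors; it is not the paper's algorithm.

```python
# Hedged sketch: flag the feeder whose earth-fault current opposes the
# reference direction. All phasor values are invented.
import cmath

reference = cmath.rect(1.0, cmath.pi / 2)        # reference current phasor
feeders = {
    "feeder_A": cmath.rect(0.8, cmath.pi / 2),   # aligned with reference
    "feeder_B": cmath.rect(1.2, -cmath.pi / 2),  # reversed -> suspect
    "feeder_C": cmath.rect(0.5, cmath.pi / 2),
}

for name, current in feeders.items():
    delta = cmath.phase(current / reference)     # phase shift vs reference
    status = "FAULT" if abs(delta) > cmath.pi / 2 else "healthy"
    print(f"{name}: phase shift {delta:+.2f} rad -> {status}")
```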
Abstract:
This paper proposes a new method for crop row detection in images of maize fields with high weed pressure. The vision system is designed to be installed onboard a mobile agricultural vehicle, i.e. subject to turns, vibrations and undesired movements. The images are captured under perspective and are affected by these undesired effects. The image processing consists of three main stages: image segmentation, double thresholding based on Otsu's method, and crop row detection. Image segmentation is based on the application of a vegetation index; the double thresholding achieves the separation between weeds and crops; and the crop row detection applies least-squares linear regression for line fitting. Crop and weed separation becomes effective, and the crop row detection compares favorably against the classical approach based on the Hough transform. Both gain effectiveness and accuracy thanks to the double thresholding, which constitutes the main finding of the paper.
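A condensed sketch of the three-stage pipeline is given below: an excess-green vegetation index, a double threshold seeded by Otsu's method, and a least-squares line fit. The image name, parameter values, and the simplified second threshold (the paper's second pass is also Otsu-based) are our assumptions.

```python
# Hedged sketch of the pipeline: vegetation index -> double threshold ->
# least-squares row line. Assumes a BGR field image on disk.
import cv2
import numpy as np

img = cv2.imread("maize_field.png")              # hypothetical image file
assert img is not None, "provide a maize field image"
b, g, r = cv2.split(img.astype(np.float32) / 255.0)

# 1) Vegetation index: excess green, ExG = 2g - r - b, rescaled to 8 bit.
exg = cv2.normalize(2 * g - r - b, None, 0, 255, cv2.NORM_MINMAX)
exg = exg.astype(np.uint8)

# 2) Double thresholding: Otsu separates vegetation from soil; a second,
#    stricter cut (placeholder for the paper's second Otsu pass) keeps the
#    brighter crop pixels and drops weeds.
t1, veg = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, crop = cv2.threshold(exg, int(1.3 * t1), 255, cv2.THRESH_BINARY)

# 3) Least-squares linear regression through the crop pixels of one row
#    (a real system would first group pixels by row).
ys, xs = np.nonzero(crop)
slope, intercept = np.polyfit(xs, ys, 1)
print(f"fitted row line: y = {slope:.3f}x + {intercept:.1f}")
```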
Abstract:
Mealiness is a textural attribute related to an internal fruit disorder that involves quality loss. It is characterised by the combination of abnormal softness of the fruit and absence of free juiciness in the mouth when eaten by the consumer. Recent research concluded with the development of a precise instrumental procedure to measure a scale of mealiness based on the combination of several rheological properties and empirical magnitudes. In this line, time-domain laser reflectance spectroscopy (TDRS) is a new medical technology used to characterise the optical properties of tissues and to locate affected areas such as tumours. Among its advantages compared with more traditional spectroscopic techniques is the ability to assess two optical parameters simultaneously and independently: the absorption of the light inside the irradiated body and the scattering of the photons across the tissues, at each wavelength, generating two coefficients (µa, the absorption coefficient, and µ's, the transport scattering coefficient). If it is assumed that these are related, respectively, to the chemical components and the physical properties of the sample, TDRS can be applied to the quantification of chemicals and the measurement of rheological properties (i.e. mealiness estimation) at the same time. Using VIS and NIR lasers as light sources, TDRS was applied in this work to Golden Delicious and Cox apples (n=90), forming several batches of untreated samples and samples storage-treated (20°C & 95% RH) to promote the development of mealiness. The collected database was clustered into different groups according to their instrumental test values (Barreiro et al, 1998). The optical coefficients were used as explanatory variables when building discriminant analysis functions for mealiness, achieving a classification score above 80% of correctly identified mealy versus fresh apples.
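To show the shape of the final classification step, here is a hedged sketch of a discriminant analysis over the two optical coefficients, with synthetic data standing in for the apple measurements; only the >80% figure above comes from the actual study.

```python
# Hedged sketch: linear discriminant analysis on (mu_a, mu_s') features.
# The feature values below are synthetic, not the apple data set.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fresh = rng.normal([0.04, 1.2], [0.01, 0.2], size=(45, 2))  # [mu_a, mu_s']
mealy = rng.normal([0.06, 0.9], [0.01, 0.2], size=(45, 2))
X = np.vstack([fresh, mealy])
y = np.array([0] * 45 + [1] * 45)   # 0 = fresh, 1 = mealy (n = 90 total)

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated accuracy ≈ {accuracy:.2f}")
```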