428 results for Subsystem
Abstract:
Since the 1990s, scholars have paid special attention to public management's role in theory and research under the assumption that effective management is one of the primary means for achieving superior performance. To some extent, this was influenced by popular business writings of the 1980s as well as the reinventing literature of the 1990s. A number of case studies, but only limited quantitative research, have been published showing that management matters in the performance of public organizations. My study examined whether management capacity increased organizational performance, using quantitative techniques. The specific research problem analyzed was whether significant differences existed between high- and average-performing public housing agencies on select criteria identified in the Government Performance Project (GPP) management capacity model, and whether this model could predict outcome performance measures in a statistically significant manner while controlling for exogenous influences. My model included two of the four GPP management subsystems (human resources and information technology), integration and alignment of subsystems, and an overall managing-for-results framework. It also included environmental and client control variables that were hypothesized to affect performance independent of management action. Descriptive results of survey responses showed high-performing agencies with better scores on most high-performance dimensions of individual criteria, suggesting support for the model; however, quantitative analysis found limited statistically significant differences between high and average performers and limited predictive power of the model. My analysis led to the following major conclusions: past performance was the strongest predictor of present performance; high unionization hurt performance; and budget-related criteria mattered more for high performance than other model factors. As to the specific research question, management capacity may be necessary, but it is not sufficient, to increase performance. The research suggested managers may benefit from implementing best practices identified through the GPP model. The usefulness of the model could be improved by adding direct service delivery to it, which may also improve its predictive power. Finally, there are abundant tested concepts and tools available to practitioners that are designed to improve management subsystem support of direct service delivery.
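A minimal sketch of this kind of capacity-performance regression, in Python with statsmodels (hypothetical variable names and synthetic data; not the study's actual dataset or specification):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical illustration of the study's modeling approach:
# regress an outcome performance score on management-capacity
# subsystem scores plus environmental/client controls.
rng = np.random.default_rng(42)
n = 120                                       # e.g., 120 housing agencies
df = pd.DataFrame({
    "hr_capacity":  rng.normal(0, 1, n),      # human resources subsystem
    "it_capacity":  rng.normal(0, 1, n),      # information technology
    "mfr":          rng.normal(0, 1, n),      # managing-for-results score
    "unionization": rng.uniform(0, 1, n),     # exogenous control
    "past_perf":    rng.normal(0, 1, n),      # lagged performance control
})
# Synthetic outcome echoing the reported findings: past performance
# dominates and unionization hurts.
df["performance"] = (0.8 * df.past_perf - 0.4 * df.unionization
                     + 0.1 * df.hr_capacity + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["hr_capacity", "it_capacity", "mfr",
                        "unionization", "past_perf"]])
model = sm.OLS(df["performance"], X).fit()
print(model.summary())
```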
Abstract:
While robots are gradually becoming part of our daily lives, they already play vital roles in many critical operations. Some of these critical tasks include surgeries, battlefield operations, and tasks that take place in hazardous environments or distant locations, such as space missions. In most of these tasks, remotely controlled robots are used instead of autonomous robots. This special area of robotics is called teleoperation. Teleoperation systems must be reliable when used in critical tasks; hence, all of the subsystems must be dependable even under a subsystem or communication-line failure. These systems are categorized as unilateral or bilateral teleoperation. A special type of bilateral teleoperation is force-reflecting teleoperation, which is further investigated here as limited- and unlimited-workspace teleoperation. The teleoperation systems configured in this study are tested both in numerical simulations and in experiments. A new method, Virtual Rapid Robot Prototyping, is introduced to create system models rapidly and accurately. This method is then extended to configure experimental setups in which actual master systems work with system models of the slave robots accompanied by virtual-reality screens, as well as with the actual slaves. Fault-tolerant design and modeling of the master and slave systems are also addressed at different levels to prevent subsystem failure. Teleoperation controllers are designed to compensate for instabilities due to communication time delays. Modifications to existing controllers are proposed to configure a controller that remains reliable under communication-line failures. Position/force controllers are also introduced for the master and/or slave robots. Controller architecture changes are then discussed in order to make these controllers dependable even in systems experiencing communication problems. The customary and proposed controllers are tested in numerical simulations on single- and multi-DOF teleoperation systems. Experimental studies are then conducted on seven different systems, covering limited- and unlimited-workspace teleoperation, to verify and improve the simulation studies. In the experiments, the proposed controllers performed successfully relative to the customary controllers. Overall, by employing the fault-tolerance features and the proposed controllers, a more reliable teleoperation system can be designed and configured, allowing these systems to be used in a wider range of critical missions.
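To make the delay problem concrete, here is a minimal sketch (in Python; not the dissertation's actual controllers) of a 1-DOF position-position bilateral teleoperation loop in which each side couples to a delayed copy of the other's position; the masses, gains, and delay are illustrative assumptions:

```python
import numpy as np

# Toy bilateral teleoperation: two 1-DOF masses coupled through a
# delayed channel with PD (position-position) coupling.
dt, T = 1e-3, 5.0                 # step size [s], horizon [s]
delay_s = 0.02                    # one-way delay [s]; raising this
d = int(delay_s / dt)             # destabilizes this naive scheme
m_m, m_s = 0.5, 0.5               # master/slave masses [kg]
kp, kd = 50.0, 5.0                # PD coupling/damping gains

n = int(T / dt)
x_m = np.zeros(n); v_m = np.zeros(n)   # master position/velocity
x_s = np.zeros(n); v_s = np.zeros(n)   # slave position/velocity

def f_op(t):                      # operator force on the master
    return 1.0 if t < 1.0 else 0.0

for k in range(n - 1):
    # Each side only sees the other's position d samples ago.
    x_m_del = x_m[k - d] if k >= d else 0.0
    x_s_del = x_s[k - d] if k >= d else 0.0
    f_m = f_op(k * dt) - kp * (x_m[k] - x_s_del) - kd * v_m[k]
    f_s = kp * (x_m_del - x_s[k]) - kd * v_s[k]
    v_m[k+1] = v_m[k] + dt * f_m / m_m; x_m[k+1] = x_m[k] + dt * v_m[k+1]
    v_s[k+1] = v_s[k] + dt * f_s / m_s; x_s[k+1] = x_s[k] + dt * v_s[k+1]

print(f"final tracking error: {abs(x_m[-1] - x_s[-1]):.4f} m")
```

Increasing `delay_s` quickly produces the delay-induced instability that motivates the modified controllers studied in work like this.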
Abstract:
Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that the systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved by using a multi-distributed modeling concept, in which the global system model is distributed over several processors connected through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies were used to examine shipboard power system behavior under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage and the harmonic behavior of components. Different experiments were conducted to validate the developed models, and the simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for fault detection in shipboard power systems due to the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies. This modeling methodology can be utilized to evaluate and predict the future behavior of NPS components at the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before integrating it into the system.
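As an illustration of the wavelet-based feature extraction step, a minimal sketch using PyWavelets (the wavelet choice, decomposition level, and per-band energy features are assumptions, not the dissertation's exact settings):

```python
import numpy as np
import pywt

# Illustrative wavelet feature extraction from a current signal:
# decompose with a Daubechies wavelet and use per-band energies
# as a feature vector for fault/harmonic diagnosis.
fs = 10_000                                  # sampling rate [Hz], assumed
t = np.arange(0, 0.2, 1 / fs)
current = np.sin(2 * np.pi * 60 * t)         # 60 Hz fundamental
current[1000:1100] += 0.8 * np.random.randn(100)  # injected "fault" burst

coeffs = pywt.wavedec(current, wavelet="db4", level=5)
# coeffs = [A5, D5, D4, D3, D2, D1]; energy per band as features
features = np.array([np.sum(c ** 2) for c in coeffs])
features /= features.sum()                   # normalize to relative energies
print(features)
```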
Abstract:
A major and growing problem faced by modern society is the high production of waste and the related effects it produces, such as environmental degradation and pollution of various ecosystems, with direct effects on quality of life. Thermal treatment technologies have been widely used in the treatment of these wastes, and thermal plasma has been gaining prominence in this field. This work focuses on developing an optimized supervision and control system applied to a plant for processing petrochemical waste and effluents using thermal plasma. The system is basically composed of an inductive plasma torch, reactors, a washing system for the exhaust gases, and the RF power supply used to generate the plasma. Supervision and control of the plant's processes are of paramount importance in reaching the ultimate goal. For this reason, various supporting tools were created in the search for greater process efficiency: event generation, plotting, distribution and storage of data for each subsystem of the plant, process execution and control, and 3D visualization of each subsystem of the plant, among others. A communication platform between the virtual 3D plant architecture and the real control structure (hardware) was created. The goal is to use mixed-reality concepts and to develop strategies for different types of controls that allow the 3D plant to be manipulated without restrictions of place or schedule, optimizing the actual process. Studies have shown that one of the best ways to implement control of inductively coupled plasma generation is to use intelligent control techniques, both for the efficiency of their results and for the low cost of their implementation, without requiring a specific model. A control strategy using fuzzy logic (Fuzzy-PI) was developed and implemented, and the results showed satisfactory performance with respect to response time and viability.
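As a minimal sketch of the Fuzzy-PI idea (hand-rolled in Python; the membership ranges, rule table, and toy plant are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

# Fuzzy-PI sketch: fuzzy inference maps error e and error change de
# to an increment du, which is integrated (the incremental form that
# makes the controller PI-like).

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# Linguistic terms for e and de: Negative, Zero, Positive (ranges assumed).
terms = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
# Rule table: (e_term, de_term) -> crisp output increment (singleton).
rules = {("N","N"): -1.0, ("N","Z"): -0.5, ("N","P"): 0.0,
         ("Z","N"): -0.5, ("Z","Z"):  0.0, ("Z","P"): 0.5,
         ("P","N"):  0.0, ("P","Z"):  0.5, ("P","P"): 1.0}

def fuzzy_pi_step(e, de):
    """Weighted-average (Sugeno-style) defuzzification of the rule base."""
    num = den = 0.0
    for (te, tde), out in rules.items():
        w = min(tri(e, *terms[te]), tri(de, *terms[tde]))  # AND = min
        num += w * out
        den += w
    return num / den if den > 0 else 0.0

# Closed loop on a toy first-order plant y' = -y + u, setpoint 1.0.
dt, y, u, e_prev = 0.01, 0.0, 0.0, 0.0
for k in range(1000):
    e = 1.0 - y
    de = (e - e_prev) / dt
    u += 0.05 * fuzzy_pi_step(e, de / 10.0)  # scale de into [-2, 2]
    y += dt * (-y + u)
    e_prev = e
print(f"output after 10 s: {y:.3f}")
```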
Abstract:
Through numerous technological advances in recent years, along with the popularization of computing devices, society is moving towards an "always connected" paradigm. Computer networks are everywhere, and the advent of IPv6 paves the way for the explosion of the Internet of Things. This concept enables the sharing of data between computing machines and everyday objects. One of the areas placed under the Internet of Things is Vehicular Networks. However, the information generated by an individual vehicle is small in volume and does not contribute to improving traffic while it remains isolated. This proposal presents the Infostructure, a system intended to facilitate the effort and reduce the cost of developing context-aware applications with high-level semantics for the Internet of Things scenario; it allows data to be managed, stored, and combined in order to generate broader context. To this end, we present a reference architecture, which aims to show the major components of the Infostructure. A prototype is then presented, which is used to validate that our work reaches the desired level of high-level semantic contextualization, as well as a performance evaluation, which aims to assess the behavior of the subsystem responsible for managing contextual information over a large amount of data. A statistical analysis is then performed on the results obtained in the evaluation. Finally, we present the conclusions of the work, some open problems, such as the lack of assurance as to the integrity of the sensory data arriving at the Infostructure, and future work that takes into account the implementation of other modules so that tests can be conducted in real environments.
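As a purely hypothetical illustration of "combining data to generate broader context" (none of these names come from the Infostructure), isolated per-vehicle speed readings can be aggregated into a higher-level road-segment context:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch: individual vehicle readings are combined per
# road segment into a broader context such as average speed and a
# congestion flag. Segment names and threshold are assumptions.
readings = [
    {"vehicle": "v1", "segment": "BR-101:km42", "speed_kmh": 38},
    {"vehicle": "v2", "segment": "BR-101:km42", "speed_kmh": 25},
    {"vehicle": "v3", "segment": "BR-101:km43", "speed_kmh": 92},
]

by_segment = defaultdict(list)
for r in readings:
    by_segment[r["segment"]].append(r["speed_kmh"])

context = {
    seg: {"avg_speed_kmh": mean(speeds),
          "congested": mean(speeds) < 40,   # assumed threshold
          "samples": len(speeds)}
    for seg, speeds in by_segment.items()
}
print(context)
```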
Abstract:
This dissertation consists of two independent musical compositions and an article detailing the process of the design and assembly of an electric guitar with particular emphasis on the carefully curated suite of embedded effects.
The first piece, 'Phase Locked Loop and Modulo Games,' is scored for electric guitar and a single echo of equal volume less than a beat away. One could think of the piece as a 15-minute canon at the unison at the dotted eighth note (or at times the quarter or triplet-quarter); however, the compositional motivation is more about weaving a composite texture between the guitar and its echo that, while in theory extremely contrapuntal, is in actuality simply a single [superhuman] melodic line.
The second piece, 'The Dogma Loops,' picks up a few compositional threads left by 'Phase Locked Loop' and weaves them into an entirely new tapestry. 'Phase Locked Loop' is motivated by the creation of a complex musical composite that is for the most part electronically transparent. 'The Dogma Loops' questions that same notion of composite electronic complexity by essentially asking: "what are the inputs to an interactive electronic system that create the most complex outputs via the simplest musical means possible?"
'The Dogma Loops' is scored for Electric Guitar (doubling on Ukulele), Violin, and Violoncello. All of the principal instruments except the Ukulele require an electronic pickup. The work is in three sections played attacca: [Automation Games], [Point of Origin], and [Cloning Vectors].
The third and final component of the document is the article 'Finding Ibrida.' This article details the process of the design and assembly of an electric guitar with integrated effects, while also providing the deeper context (conceptual and technical) that motivated the efforts and informed the challenges of hybridizing the various technologies (tubes, transistors, digital effects, and a microcontroller subsystem). The project was motivated by a desire for rigorous technical and hands-on engagement with analog signal processing as applied to the electric guitar. 'Finding Ibrida' explores sound, some myths and lore of guitar tech, and the history of electric guitar distortion and its culture of sonic exploration.
Abstract:
Microturbines are among the most successfully commercialized distributed energy resources, especially when they are used for combined heat and power generation. However, the interrelated thermal and electrical system dynamic behaviors have not been fully investigated. This is technically challenging due to the complex thermo-fluid-mechanical energy conversion processes, which introduce multiple time-scale dynamics and strong nonlinearity into the analysis. To tackle this problem, this paper proposes a simplified model which can predict the coupled thermal and electric output dynamics of microturbines. Considering the time-scale difference of the various dynamic processes occurring within microturbines, the electromechanical subsystem is treated as a fast quasi-linear process, while the thermo-mechanical subsystem is treated as a slow process with high nonlinearity. A three-stage subspace identification method is utilized to capture the dominant dynamics and predict the electric power output. For the thermo-mechanical process, a radial basis function model trained by the particle swarm optimization method is employed to handle the strong nonlinear characteristics. Experimental tests on a Capstone C30 microturbine show that the proposed modeling method captures the system dynamics well and produces a good prediction of the coupled thermal and electric outputs in various operating modes.
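A minimal sketch of the nonlinear RBF modeling idea (in Python; all signals are synthetic assumptions, and for brevity the output weights are fit by least squares, whereas the paper trains the model with particle swarm optimization):

```python
import numpy as np

# Toy radial basis function (RBF) model of a nonlinear static map,
# standing in for the slow thermo-mechanical path.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 200)              # e.g., normalized fuel input
y = np.sin(2 * np.pi * u) + 0.5 * u ** 2    # surrogate nonlinear response
y += 0.02 * rng.standard_normal(u.size)     # measurement noise

centers = np.linspace(0.0, 1.0, 10)         # RBF centers on input range
width = 0.1                                  # shared Gaussian width

def design(x):
    # Gaussian RBF features, one column per center.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

Phi = design(u)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output-layer weights

u_test = np.linspace(0.0, 1.0, 5)
print(design(u_test) @ w)                    # predicted slow response
```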
Abstract:
This thesis aimed to evaluate the implementation of the Food Acquisition Program (PAA) through CONAB/RN in the period 2003-2010, from the perception of all agents involved in the implementation of the government program. The methodological trajectory adopted a descriptive, bibliographical, and documentary approach with qualitative and quantitative triangulation, also called evaluative research. The theoretical model was supported by the authors Draibe (2001), Aguilar and Ander-Egg (1994), and Silva (2001), among others, who focused on family farming and the evaluation of public policy implementation, having as a category of analysis the implementation dimension of the policy, divided into 10 theoretical dimensions. The universe consisted of three groups: the first comprised managers and technicians from CONAB (RN and Brasília), totaling 15 subjects; the second comprised the associations/cooperatives that participated in the program in 2010, with a sample of 15 representatives in each access modality; the third totaled 309 representatives of governmental and non-governmental organizations that received donations of food in the same period. Semi-structured interviews and forms were adopted as instruments of data collection. The data were processed qualitatively by content analysis (interviews and documents) and quantitatively by means of statistical tests that allowed inferences and the use of frequencies. Among the key findings is that the program does not stand on a structure supported by planning. The interests of the implementers do not necessarily converge with the objectives of the Food Acquisition Program (PAA). A clash of goals was identified (within the same program) when comparing the financial agents (the Ministry of Rural Development and the Ministry of Social Development and Fight Against Hunger) and the executor, CONAB/RN. Within the assessed dimensions, the most fragile is the managerial decision-making, organizational environment, and internal assessment subsystem; the logistical and operational subsystem also deserves attention, as it too proved weak. The focus on expanding the quantification of the results of the Food Acquisition Program (PAA) by CONAB/RN neglects a quality of management focused on what it really should be: compliance with the institutional objectives of the government program. Finally, the perspective of negotiated implementation should be re-examined, because excessive discretion by managers, along with the technical staff, has characterized the real role of the Food Acquisition Program (PAA) as public policy. We conclude that the implementation model, which apparently adds value for the benefited citizens, has weakened the context of work on family farms; the management model of the implementation process should be reviewed by the Federal Government so as to point to other paths, guided by the emancipation and development of the countryside, while at the same time enabling the reduction of the nutritional deficiency of beneficiaries in a balanced and coherent way.
Abstract:
Software architecture is a high-level description of a software-intensive system that enables architects to have better intellectual control over the complete system. It is also used as a communication vehicle among the various system stakeholders. Variability in software-intensive systems is the ability of a software artefact (e.g., a system, subsystem, or component) to be extended, customised, or configured for deployment in a specific context. Although variability in software architecture is recognised as a challenge in multiple domains, there has been no formal consensus on how variability should be captured or represented. In this research, we addressed the problem of representing variability in software architecture through a three-phase approach. First, we examined the existing literature using the Systematic Literature Review (SLR) methodology, which helped us identify the gaps and challenges within the current body of knowledge. Equipped with the findings from the SLR, a set of design principles was formulated and used to introduce variability management capabilities into an existing Architecture Description Language (ADL). The chosen ADL (ALI) was developed within our research group, and we had complete access to it. Finally, we evaluated the new version of the ADL using two distinct case studies: one from the information systems domain, an Asset Management System (AMS), and another from the embedded systems domain, a Wheel Brake System (WBS). This thesis presents the main findings from the three phases of the research work, including a comprehensive study of the state of the art; the complete specification of an ADL that is focused on managing variability; and the lessons learnt from the evaluation work on two distinct real-life case studies.
Abstract:
In the past decades, social-ecological systems (SESs) worldwide have undergone dramatic transformations with often detrimental consequences for livelihoods. Although resilience thinking offers promising conceptual frameworks to understand SES transformations, empirical resilience assessments of real-world SESs are still rare because SES complexity requires integrating knowledge, theories, and approaches from different disciplines. Taking up this challenge, we empirically assess the resilience of a South African pastoral SES to drought using various methods from natural and social sciences. In the ecological subsystem, we analyze rangelands’ ability to buffer drought effects on forage provision, using soil and vegetation indicators. In the social subsystem, we assess households’ and communities’ capacities to mitigate drought effects, applying agronomic and institutional indicators and benchmarking against practices and institutions in traditional pastoral SESs. Our results indicate that a decoupling of livelihoods from livestock-generated income was initiated by government interventions in the 1930s. In the post-apartheid phase, minimum-input strategies of herd management were adopted, leading to a recovery of rangeland vegetation due to unintentionally reduced stocking densities. Because current livelihood security is mainly based on external monetary resources (pensions, child grants, and disability grants), household resilience to drought is higher than in historical phases. Our study is one of the first to use a truly multidisciplinary resilience assessment. Conflicting results from partial assessments underline that measuring narrow indicator sets may impede a deeper understanding of SES transformations. The results also imply that the resilience of contemporary, open SESs cannot be explained by an inward-looking approach because essential connections and drivers at other scales have become relevant in the globalized world. Our study thus has helped to identify pitfalls in empirical resilience assessment and to improve the conceptualization of SES dynamics.
Abstract:
The exponential increase in the demand for communication bandwidth foreshadows a saturation of the capacity of telecommunication networks that is expected to materialize within the next decade. Indeed, information theory predicts that nonlinear effects in single-mode fibers limit their transmission capacity, and little gain at this level can be expected from the traditional multiplexing techniques developed and used so far in high-capacity systems. The spatial dimension of the optical channel has been proposed as a new degree of freedom that can be used to increase the number of transmission channels and, consequently, resolve this looming "capacity crunch". Thus, inspired by microwave techniques, the emerging technique called space-division multiplexing (SDM) is a promising technology for building next-generation optical networks. To implement SDM in optical fiber links, all integrated devices, equipment, and subsystems must be re-examined. Among these elements, the SDM optical amplifier is critical, in particular for long-haul transmission systems. Owing to the excellent characteristics of the erbium-doped fiber amplifier (EDFA) used in today's state-of-the-art systems, the EDFA is once again a leading candidate for the implementation of practical SDM amplifiers. However, since SDM introduces a spatial variation of the field in the transverse plane of the fiber, spatially integrated erbium-doped fiber amplifiers (SIEDFA) require careful design. In this thesis, we first review recent progress in SDM, in particular SDM optical amplifiers. We then identify and discuss the key issues of SIEDFA that call for scientific investigation. Following this, the theory of EDFAs is briefly presented, and a numerical model that can be used to simulate SIEDFA is proposed. Based on a home-made simulation tool, we propose a novel annular doping profile design for erbium-doped few-mode fibers (ED-FMF), and we numerically evaluate the performance of a single-stage amplifier based on such annularly doped fiber, as well as of a dual-stage amplifier, for few-mode communications. Subsequently, we design annularly doped, erbium-doped multicore fibers (ED-MCF) and numerically evaluate the overlap of the pump with the multiple cores of these amplifiers. Beyond the design work, we fabricate and characterize an erbium-doped few-mode multicore fiber and carry out the first demonstration of spatially integrated optical fiber amplifiers incorporating such doped fibers. Finally, we present the conclusions and perspectives of this research. The research and development of SIEDFA will offer enormous benefits not only for future SDM transmission systems, but also for single-mode transmission systems over standard single-core fibers, since they allow several amplifiers to be replaced by a single integrated amplifier.
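As background for the numerical modeling mentioned above, a standard steady-state two-level EDFA model (textbook form, with amplified spontaneous emission omitted for brevity; not necessarily the exact equations implemented in the thesis's home-made simulator) propagates the signal and pump powers along the fiber as:

```latex
\frac{dP_s}{dz} = \Gamma_s\left[\sigma_e^{(s)} N_2(z) - \sigma_a^{(s)} N_1(z)\right] P_s(z),
\qquad
\frac{dP_p}{dz} = -\Gamma_p\,\sigma_a^{(p)} N_1(z)\,P_p(z),
\qquad
N_1(z) + N_2(z) = N_{\mathrm{Er}}
```

Here $\sigma_a$ and $\sigma_e$ are the absorption and emission cross sections, and $\Gamma_{s,p}$ are the overlap integrals between the optical fields and the erbium-doped region. In a SIEDFA these overlaps become mode- and core-dependent, which is precisely what the annular doping profiles proposed in the thesis are designed to shape.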
Abstract:
The work presented in my thesis addresses the two cornerstones of modern astronomy: observation and instrumentation. Part I deals with the observation of two nearby active galaxies, the Seyfert 2 galaxy NGC 1433 and the Seyfert 1 galaxy NGC 1566, both at a distance of $\sim10$ Mpc, which are part of the Nuclei of Galaxies (NUGA) sample. It is well established that every galaxy harbors a supermassive black hole (SMBH) at its center. Furthermore, there seems to be a fundamental correlation between the stellar bulge and SMBH masses. Simulations show that massive feedback, e.g., powerful outflows, in Quasi Stellar Objects (QSOs) has an impact on the mutual growth of bulge and SMBH. Nearby galaxies follow this relation but accrete mass at much lower rates. This gives rise to the following questions: Which mechanisms allow feeding of nearby Active Galactic Nuclei (AGN)? Is this feeding triggered by events, e.g., star formation, nuclear spirals, or outflows, on $\sim500$ pc scales around the AGN? Does feedback on these scales play a role in quenching the feeding process? Does it have an effect on the star formation close to the nucleus? To answer these questions I have carried out observations with the Spectrograph for INtegral Field Observation in the Near Infrared (SINFONI) at the Very Large Telescope (VLT), situated on Cerro Paranal in Chile. I have reduced and analyzed the recorded data, which contain spatial and spectral information in the H-band ($1.45$-$1.85\,\mu$m) and K-band ($1.95$-$2.45\,\mu$m) on the central $10''\times10''$ of the observed galaxies. Additionally, Atacama Large Millimeter/Submillimeter Array (ALMA) data at $350$ GHz ($\sim0.87$ mm) as well as optical high-resolution Hubble Space Telescope (HST) images are used for the analysis. For NGC 1433 I deduce from a comparison of the distributions of gas, dust, and the intensity of highly ionized emission lines that the galaxy center lies $\sim70$ pc north-northwest of the prior estimate. A velocity gradient is observed at the new center, which I interpret as a bipolar outflow, a circumnuclear disk, or a combination of both. At least one dust and gas arm leads from a $r\sim200$ pc ring towards the nucleus and might feed the SMBH. Two bright warm H$_2$ gas spots are detected that indicate hidden star formation or a spiral arm-arm interaction. From the stellar velocity dispersion (SVD) I estimate an SMBH mass of $\sim1.74\times10^7\,M_\odot$. For NGC 1566 I observe a nuclear gas disk of $\sim150$ pc in radius with a spiral structure. I estimate the total mass of this disk to be $\sim5.4\times10^7\,M_\odot$. Which mechanisms excite the gas in the disk is not clear. Neither can the existence of outflows be proven, nor is star formation detected over the whole disk. On one side of the spiral structure I detect a star-forming region with an estimated star formation rate of $\sim2.6\times10^{-3}\,M_\odot$ yr$^{-1}$. From broad Br$\gamma$ emission and the SVD I estimate a mean SMBH mass of $\sim5.3\times10^6\,M_\odot$ with an Eddington ratio of $\sim2\times10^{-3}$. Part II deals with the final tests, which I conducted, of the Fringe and Flexure Tracker (FFTS) for the LBT INterferometric Camera and NIR/Visible Adaptive iNterferometer for Astronomy (LINC-NIRVANA) at the Large Binocular Telescope (LBT) in Arizona, USA. The FFTS is the subsystem that combines the two separate beams of the LBT and enables near-infrared interferometry with a significantly large field of view.
The FFTS has a cryogenic system and an ambient-temperature system, which are separated by the baffle system. I redesigned this baffle to guarantee the functionality of the system after the final tests in the Cologne cryostat. The redesign did not affect any scientific performance of LINC-NIRVANA. I show in the final cooldown tests that the baffle fulfills the temperature requirement and stays $<110$ K, whereas the moving stages in the ambient system stay $>273$ K, which was not the case for the old baffle design. Additionally, I test the tilting flexure of the whole FFTS and show that accurate positioning of the detector and tracking during observation can be guaranteed.
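For context on the SMBH mass and Eddington-ratio estimates quoted in Part I, the standard scaling relations involved are given below (textbook forms; the coefficients use the Gültekin et al. 2009 M-sigma calibration as an assumption, since the abstract does not state which calibration is adopted):

```latex
% M-sigma relation (Gültekin et al. 2009 calibration, assumed):
\log_{10}\!\left(\frac{M_{\rm BH}}{M_\odot}\right)
  = 8.12 + 4.24\,\log_{10}\!\left(\frac{\sigma_*}{200\ \mathrm{km\,s^{-1}}}\right)

% Eddington luminosity and Eddington ratio:
L_{\rm Edd} \simeq 1.26\times10^{38}\left(\frac{M_{\rm BH}}{M_\odot}\right)\ \mathrm{erg\,s^{-1}},
\qquad
\lambda_{\rm Edd} = \frac{L_{\rm bol}}{L_{\rm Edd}}
```

For example, with the quoted $M_{\rm BH}\sim5.3\times10^6\,M_\odot$ for NGC 1566, an Eddington ratio of $\sim2\times10^{-3}$ corresponds to $L_{\rm bol}\sim1.3\times10^{42}$ erg s$^{-1}$.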
Abstract:
In modern society, personal health is a very important issue for everyone. With the development of science and technology, new and improved health monitoring devices and techniques will play a key role in daily medical activities. This paper focuses on making progress in the design of wearable vital sign systems. A vital sign monitoring system has been proposed and designed. The whole detection system is composed of a signal collecting subsystem, a signal processing subsystem, a short-range wireless communication subsystem, and a user interface subsystem. The signal collecting subsystem is composed of a light source and a photodiode; after light of two different wavelengths is emitted, the photodiode collects the light signal reflected by human body tissue. The signal processing subsystem is based on the AFE4490 analog front end and peripheral circuits; at this stage the collected analog signal is filtered and converted into a digital signal. After a series of processing steps, the signal is transmitted through SPI to the short-range wireless communication subsystem, which is mainly based on the Bluetooth 4.0 protocol and the ultra-low-power System on Chip (SoC) nRF51822. Finally, the signal is transmitted to the user end. After proposing and building the system, this paper focuses on the research of the key component in the system, namely the photodetector. Based on the study of perovskite materials, a low-temperature-processed photodetector has been proposed, designed, and investigated. The device is made up of a light-absorbing layer, an electron-transporting and hole-blocking layer, a hole-transporting and electron-blocking layer, a conductive substrate layer, and a metal electrode layer. The light-absorbing layer is the most important part of the device and is fabricated from perovskite materials. Upon illumination, electron-hole pairs are produced in this layer, and, owing to the energy level differences, the electrons and holes are transported to the metal electrode and the conductive substrate electrode through the electron-transporting and hole-transporting layers, respectively. In this way the response current is produced. Based on this structure, the fabrication procedure includes substrate cleaning; PEDOT:PSS layer preparation; perovskite layer preparation; PCBM layer preparation; and C60, BCP, and Ag electrode layer preparation. After device fabrication, a series of morphological characterizations and performance tests was carried out, including film-forming quality inspection; analysis of the response current versus light wavelength; and linearity, response time, and other optical and electrical property measurements. The test results show that the film is uniform; the device produces an obvious response current to incident light with wavelengths from 350 nm to 800 nm, and the response current changes with the light wavelength. When the light wavelength is kept constant, there is a good linear relationship between the intensity of the response current and the power of the incident light, on the basis of which the device can be used as a photodetector to collect light information. During changes of the light signal, the response time of the device is several microseconds, which is acceptable for a photodetector in our system.
The test results show that the device has good electronic and optical properties and that the fabrication procedure is repeatable; the properties of the devices have good uniformity, which demonstrates that the fabrication method and procedure can be used to build the photodetector in our wearable system. Based on this series of test results, the paper draws the conclusion that the fabricated photodetector can be integrated on a flexible substrate and is suitable for the proposed monitoring system, thus making some progress in the research on wearable monitoring systems and devices. Finally, some future prospects in system design as well as device design and fabrication are proposed.
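Since the two-wavelength reflective arrangement described above is the classic pulse-oximetry configuration, a minimal sketch of the standard downstream computation (the ratio-of-ratios SpO2 estimate; this is not the paper's algorithm, and the calibration constants are assumptions) could look like:

```python
import numpy as np

# Standard ratio-of-ratios SpO2 estimate from two-wavelength PPG data.
fs = 100                                  # sample rate [Hz], assumed
t = np.arange(0, 10, 1 / fs)

# Surrogate photodiode signals: DC tissue level + pulsatile AC part.
red = 1.00 + 0.010 * np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm pulse
ir  = 1.20 + 0.020 * np.sin(2 * np.pi * 1.2 * t)

def ac_dc(x):
    """AC amplitude (peak-to-peak) and DC level of a PPG channel."""
    return x.max() - x.min(), x.mean()

ac_r, dc_r = ac_dc(red)
ac_ir, dc_ir = ac_dc(ir)
R = (ac_r / dc_r) / (ac_ir / dc_ir)       # ratio of ratios

# Linear empirical calibration SpO2 = A - B*R (A, B assumed here;
# real devices calibrate against reference oximetry data).
A, B = 110.0, 25.0
spo2 = A - B * R
print(f"R = {R:.3f}, estimated SpO2 = {spo2:.1f}%")
```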