162 results for Leveraged buyout


Relevance:

10.00%

Publisher:

Abstract:

The perturbation of homeostatic mechanisms caused by interactions between an indwelling biomedical device and the biological medium into which it is implanted initiates a dynamic wound healing response from the host which can be rigorous and ongoing. The typical result of this response is a severe degradation in the performance and safety of the device, often to the extent of precluding its clinical use. Nitric oxide (NO) is an endogenously produced biomolecule capable of mediating many of the cellular processes leveraged against implanted devices. The in vivo performance of indwelling devices prepared with NO release coatings has recently been evaluated with very encouraging results. This work developed a platform capable of both generating programmable fluxes of NO and directly evaluating the performance of indwelling probes under different profiles of NO generation. This platform can be used to improve the efficacy of NO release materials in mitigating the host response.

Relevance:

10.00%

Publisher:

Abstract:

The fulcrum upon which were leveraged many of the dramatic progressive changes in Montana documented in the "In the Crucible of Change" series was the lead-up to, preparation, writing, and adoption of the 1972 Montana Constitution. As Montana citizens expressed their concern over the dysfunctional state government in MT under its 1889 Constitution, one of the areas that stood out as needing serious change was the Montana Legislature. Meeting for only sixty calendar days every two years, the Legislature regularly carried off the subterfuge of stopping the wall clock at 11:59 PM on the sixtieth day and placing a shroud over it so it could continue to conduct business as if it were still the 60th day. Lawyers hired by the Anaconda Company drafted most bills that legislators wanted to have introduced. Malapportionment, especially in the State Senate where each county had one Senator regardless of its population, created a situation where Petroleum County with 800 residents had one senator while neighboring Yellowstone County with 80,000 people also had one senator -- a 100-1 differential in representation. Reapportionment imposed by rulings of the US Supreme Court in the mid-1960s created great furor in rural Montana, adding to the existing dissatisfaction of the urban centers. Stories of Anaconda Company “thumbs up – thumbs down” control of the votes were prevalent. Committee meetings and votes were held behind closed doors, and recorded votes were non-existent except for the nearly meaningless final tally. People were in the dark about the creation of laws that affected their daily lives. It was clear that change in the Legislature had to take the form of change in the Constitution and, because the Legislature was not likely to advance constitutional amendments on the subject, a convention seemed the only remedy.
Once the Convention was called and went to work, it became apparent that the Legislative Article offered both an opportunity for change and the danger that too dramatic a change might sink the whole new document. The activities of the Legislative Committee, and of the whole Convention when acting on legislative issues, provide one of the more compelling stories of change. The story of the Legislative Article of the Montana Constitution is discussed in this episode by three major players who were directly involved in the effort: Jerry Loendorf, Arlyne Reichert and Rich Bechtel. Their recollections of the activities surrounding the entire Constitutional Convention, and specifically the Legislative Article, provide an insider’s perspective on the development of the entire Constitution and of the legislative portion, which was of such high interest to the people of Montana during the important period of progressive change documented “In the Crucible of Change.” Jerry Loendorf, who served as Chair of the Legislative Committee at the 1972 Montana Constitutional Convention, received a BA from Carroll College in 1961 and a JD from the University of Montana Law School in 1964. Upon graduation he served two years as a law clerk for the Montana Supreme Court, after which he was for 34 years a partner in the law firm of Harrison, Loendorf & Posten, Duncan. In addition to being a delegate to the Constitutional Convention, Jerry served on the Board of Labor Appeals from 2000 to 2004. He was designated a Montana Special Assistant Attorney General to represent the state in federal court in the challenge to the results of the ratification election of Montana's Constitution in 1972. Jerry served on the Carroll College Board of Directors in the late 1960s and then again as a member of the Board of Trustees of Carroll College from 2001 to 2009.
He has served on the Board of Directors of the Rocky Mountain Development Council since 1970 and was on the board of the Helena YMCA from 1981 to 1987. He also served on the board of the Good Samaritan Ministries from 2009 to 2014. On the business side, Jerry was on the Board of Directors of Valley Bank of Helena from 1980 to 2005. He is a member of the American Bar Association, the State Bar of Montana, the First Judicial District Bar Association, and the Montana Trial Lawyers Association. Carroll College awarded Jerry the Warren Nelson Award in 1994 and the Insignias Award in 2007. At Carroll College, Jerry has funded three scholarship endowments: George C and Helen T Loendorf, Gary Turcott, and Fr. William Greytek. Arlyne Reichert, Great Falls Delegate to the Constitutional Convention and former State Legislator, was born in Buffalo, NY in 1926 and attended the University of Buffalo in conjunction with Cadet Nurses Training during WWII. She married a Montanan in Great Falls in 1945 and was widowed in 1968. She is the mother of five, grandmother of seven, and great-grandmother of four. Arlyne was employed by the McLaughlin Research Institute in Great Falls for 23 years, serving as Technical Editor of Transplantation Journal in 1967 and retiring as Assistant Director in 1989. In addition to being a state legislator (1979 Session) and a delegate to the 1972 Montana Constitutional Convention, she has filled many public roles, including Cascade County Study Commissioner (1974), MT Comprehensive Health Council, US Civil Rights Commission MT Advisory Committee, MT Capitol Restoration Committee, and Great Falls Public Library Trustee.
Arlyne has engaged in many non-profit activities, including the League of Women Voters (State & Local Board Officer – from where her interest in Montana constitutional change developed), the Great Falls Public Radio Association (President & Founder), the American Cancer Society (President, Great Falls Chapter), Chair of the MT Rhodes Scholarship Committee, and Council Member of the National Civic League. She also served for a time as a Television Legislative Reporter. Arlyne has received numerous awards, including the National Distinguished Citizens Award from the National Municipal League, two Women of Achievement Awards from Business & Professional Women, the Salute to Women Award from the YWCA, the Heritage Preservation Award from the Cascade County Historical Society and the State of Montana, and the Heroes Award from Humanities Montana. She remains active, serving as Secretary-Treasurer of Preservation Cascade, Inc., and as a Board Member of the McLaughlin Research Institute. Her current passion is the preservation of the historic 10th Street Bridge that crosses the Missouri River in Great Falls. Rich Bechtel of Helena was born in Napa, California in 1945 and grew up as an Air Force brat, living in such places as Bitburg, Germany; Tripoli, Libya; and Sevilla, Spain. He graduated from Glasgow High School and the University of Montana. Rich was a graduate assistant for noted Montana history professor K. Ross Toole, but dropped out of graduate school to pursue a real life in Montana politics and government. Rich has had a long, varied and colorful career in the public arena. He currently is the Director of the Office of Taxpayer Assistance & Public Outreach for MT’s Department of Revenue. He previously held two positions with the National Wildlife Federation in Washington, DC (Sr. Legislative Representative [1989-91] and Sr. Legislative Representative for Wildlife Policy [2004-2006]).
While in Washington DC, he also was Assistant to Senator Lee Metcalf (D-MT), 1974-1976; Federal-State Coordinator for the State of Montana, 1976-1989; Director of the Western Governors’ Association Washington Office, 1991-2000; and Director of Federal Affairs for Governor Kitzhaber of Oregon, 2001-2003. Earlier in Montana government, between 1971 and 1974, Rich was Research Analyst for the MT Blue Ribbon Commission on Postsecondary Education, Legislative Consultant and Bill Drafter for the MT Legislative Council, Research Analyst for the MT Constitutional Convention Commission, where he provided original research on legislatures, and Researcher/Staff for the MT Constitutional Convention Legislative Committee, where he drafted the various provisions of the Legislative Article and the majority and minority reports on behalf of the Committee members. Rich has represented Montana’s Governor on a trade and cultural mission to the Republic of China and participated in US-German Acid Rain Committee sessions in Germany and in meetings with European Economic Community environmental officials in Belgium. He is married to Yvonne Seng (Ph.D.), a T’ai Chi apprentice, author, and birder.

Relevance:

10.00%

Publisher:

Abstract:

The use of XML for representing eLearning content and for automatically generating different kinds of teaching media from this material is nowadays, with all its advantages, state of the art. In recent years there have been numerous projects that leveraged XML-based production environments. Once project funding ends, the created materials have to be maintained with limited (human) resources. In the majority of cases this is only possible if the authors can care for their teaching material without extensive IT support. From our point of view there has so far been a lack of intuitively usable XML editors. The prototype of such an XML editor, “aXess”, is introduced with the intention of encouraging a broad discussion about the features required to manage the content of eLearning materials.
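The single-source publishing idea summarized above can be sketched in a few lines of Python: a minimal, hypothetical lesson format (element names are invented, not aXess's actual schema) is transformed into one of several possible output media, here an HTML fragment.

```python
import xml.etree.ElementTree as ET

# Hypothetical lesson markup; element names are illustrative only.
LESSON_XML = """
<lesson title="Intro to XML">
  <section heading="Why XML?">
    <para>XML separates content from presentation.</para>
    <para>The same source can target web, print and slides.</para>
  </section>
</lesson>
"""

def lesson_to_html(xml_text: str) -> str:
    """Render one lesson XML document as a minimal HTML fragment."""
    root = ET.fromstring(xml_text)
    parts = [f"<h1>{root.get('title')}</h1>"]
    for section in root.findall("section"):
        parts.append(f"<h2>{section.get('heading')}</h2>")
        for para in section.findall("para"):
            parts.append(f"<p>{para.text}</p>")
    return "\n".join(parts)

html = lesson_to_html(LESSON_XML)
```

An XSLT stylesheet would be the more common choice in a real XML production environment; the point here is only that one XML source can feed several output generators.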

Relevance:

10.00%

Publisher:

Abstract:

This paper examines how a second-tier high-technology region leveraged corporate assets—mostly from transnational firms—in building a knowledge-based economy. The paper reviews how firm building and entrepreneurship influence the evolution of a peripheral regional economy. Using a case study of Boise, Idaho (the US), the research highlights several important sources of entrepreneurship. Entrepreneurial firm formation is closely linked with a region's ability to grow incubator organizations, particularly innovative firms. These innovative firms provide the training ground for entrepreneurs. Firms, however, differ and the ways in which firm building activities influence regional entrepreneurship depend on firm strategy and organization. Thus, second-tier high-tech regions in the US are taking a different path than their well-known counterparts such as Silicon Valley or Route 128 around Boston.

Relevance:

10.00%

Publisher:

Abstract:

Gaining economic benefits from substantially lower labor costs has been reported as a major reason for offshoring labor-intensive information systems services to low-wage countries. However, if wage differences are so high, why is there such a high level of variation in the economic success of offshored IS projects? This study argues that offshore outsourcing involves a number of extra costs for the client organization that account for the economic failure of offshore projects. The objective is to disaggregate these extra costs into their constituent parts and to explain why they differ between offshored software projects. The focus is on software development and maintenance projects that are offshored to Indian vendors. A theoretical framework is developed a priori based on transaction cost economics (TCE) and the knowledge-based view of the firm, complemented by factors that acknowledge the specific offshore context. The framework is empirically explored using a multiple case study design including six offshored software projects in a large German financial service institution. The results of our analysis indicate that the client incurs post-contractual extra costs for four types of activities: (1) requirements specification and design, (2) knowledge transfer, (3) control, and (4) coordination. In projects that require a high level of client-specific knowledge about idiosyncratic business processes and software systems, these extra costs were found to be substantially higher than in projects where more general knowledge was needed. Notably, these costs most often arose independently of the threat of opportunistic behavior, challenging the predominant TCE logic of market failure. 
Rather, the client extra costs were particularly high in client-specific projects because the effort of managing the consequences of the knowledge asymmetries between client and vendor was particularly high in these projects. Prior experience of the vendor with related client projects was found to reduce the level of extra costs but could not fully offset the increase in extra costs in highly client-specific projects. Moreover, cultural and geographic distance between client and vendor, as well as personnel turnover, were found to increase client extra costs. Slight evidence was found, however, that the cost-increasing impact of these factors was amplified in projects with a high level of required client-specific knowledge (moderator effect). (This paper was recommended for acceptance by Associate Guest Editor Erran Carmel.)

Relevance:

10.00%

Publisher:

Abstract:

Survivors of childhood cancer carry a substantial burden of morbidity and are at increased risk for premature death. Furthermore, clear associations exist between specific therapeutic exposures and the risk for a variety of long-term complications. The entire landscape of health issues encountered for decades after successful completion of treatment is currently being explored in various collaborative research settings. These settings include large population-based or multi-institutional cohorts and single-institution studies. The ascertainment of outcomes has depended on self-reporting, linkage to registries, or clinical assessments. Survivorship research in the cooperative group setting, such as the Children's Oncology Group, has leveraged the clinical trials infrastructure to explore the molecular underpinnings of treatment-related adverse events, and to understand specific complications in the setting of randomized risk-reduction strategies. This review highlights the salient findings from these large collaborative initiatives, emphasizing the need for life-long follow-up of survivors of childhood cancer, and describing the development of several guidelines and efforts toward harmonization. Finally, the review reinforces the need to identify populations at highest risk, facilitating the development of risk prediction models that would allow for targeted interventions across the entire trajectory of survivorship.

Relevance:

10.00%

Publisher:

Abstract:

In November 2010, nearly 110,000 people in the United States were waiting for organs for transplantation. Although the organ donor registration rate has doubled in the last year, Texas has the lowest registration rate in the nation. Given the need for improved registration rates in Texas, this practice-based culminating experience consisted of writing an application for federal funding for the central Texas organ procurement organization, Texas Organ Sharing Alliance. The culminating experience has two levels of significance for public health: (1) to engage in an activity to promote organ donation registration, and (2) to provide professional experience in grant writing. The process began with a literature review, intended to identify intervention activities that had been successful in motivating organ donation registration and that could be used in the intervention design for the grant application. Conclusions derived from the literature review included: (1) family discussions must be specifically encouraged, (2) religious and community leaders can be leveraged to facilitate organ donation conversations in families, (3) communication content must be culturally sensitive, and (4) ethnic disparities in transplantation must be acknowledged and discussed. After the literature review, the experience followed a five-step process of developing the grant application: securing permission to proceed, assembling a project team, creating a project plan and timeline, writing each element of the grant application (including the design of the proposed intervention activities), and completing the federal grant application. After the grant application was written, an evaluation of the grant-writing process was conducted and opportunities for improvement were identified. 
The first opportunity was the need for better timeline management to allow for review of the application by an independent party, iterative development of the budget proposal, and development of collaborative partnerships. Another improvement opportunity was the management of conflict over the design of the intervention, which stemmed from marketing versus evidence-based approaches. The most important improvement opportunity was the need to develop a more exhaustive evaluation plan. Eight supplementary files are attached as appendices: Feasibility Discussion in Appendix 1, Grant Guidance and Workshop Notes in Appendix 2, Presentation to Texas Organ Sharing Alliance in Appendix 3, Team Recruitment Presentation in Appendix 5, Grant Project Narrative in Appendix 7, Federal Application Form in Appendix 8, and Budget Workbook with Budget Narrative in Appendix 9.

Relevance:

10.00%

Publisher:

Abstract:

Enriching knowledge bases with multimedia information makes it possible to complement textual descriptions with visual and audio information. Such complementary information can help users to understand the meaning of assertions, and in general improve the user experience with the knowledge base. In this paper we address the problem of how to enrich ontology instances with candidate images retrieved from existing Web search engines. DBpedia has evolved into a major hub in the Linked Data cloud, interconnecting millions of entities organized under a consistent ontology. Our approach taps into the Wikipedia corpus to gather context information for DBpedia instances and takes advantage of image tagging information when this is available to calculate semantic relatedness between instances and candidate images. We performed experiments with focus on the particularly challenging problem of highly ambiguous names. Both methods presented in this work outperformed the baseline. Our best method leveraged context words from Wikipedia, tags from Flickr and type information from DBpedia to achieve an average precision of 80%.
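As a rough illustration of the relatedness computation described above (the paper's actual features and weighting are not reproduced here), candidate images can be ranked by the cosine similarity between an instance's Wikipedia context words and each image's Flickr-style tags. All file names and tags below are invented:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_images(context_words, images):
    """Rank candidate images by tag similarity to the instance context."""
    ctx = Counter(context_words)
    scored = [(cosine(ctx, Counter(tags)), name) for name, tags in images]
    return [name for _, name in sorted(scored, reverse=True)]

# Ambiguous name "Jaguar": context words come from the instance's Wikipedia text.
context = ["cat", "feline", "rainforest", "predator", "cat"]
candidates = [
    ("img_car.jpg", ["car", "vehicle", "engine"]),
    ("img_cat.jpg", ["cat", "feline", "wildlife"]),
]
best = rank_images(context, candidates)[0]
```

The ambiguity problem the paper targets shows up here directly: without context words, both images are equally plausible matches for the bare name.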

Relevance:

10.00%

Publisher:

Abstract:

Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered as valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems mainly related to the appearance of synonymous and ambiguous tags arise, specifically in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures where tag meanings are identified, and relations between them are asserted. For such purpose, they use DBpedia as a general knowledge base from which they leverage its multilingual capabilities.
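A minimal sketch of the tag-disambiguation idea, assuming a hand-written sense inventory in place of real DBpedia lookups: each candidate sense carries context terms, and the sense sharing the most terms with the tag's co-occurring tags wins. All URIs and terms below are illustrative.

```python
# Hypothetical sense inventory: each candidate sense has context terms.
SENSES = {
    "apple": [
        ("dbpedia:Apple_Inc.", {"iphone", "mac", "computer", "software"}),
        ("dbpedia:Apple", {"fruit", "tree", "orchard", "food"}),
    ],
}

def disambiguate(tag, co_tags, senses=SENSES):
    """Pick the sense whose context overlaps most with the tag's co-occurring tags."""
    best_uri, best_overlap = None, -1
    for uri, context in senses.get(tag, []):
        overlap = len(context & set(co_tags))
        if overlap > best_overlap:
            best_uri, best_overlap = uri, overlap
    return best_uri

sense = disambiguate("apple", ["fruit", "orchard", "recipe"])
```

A real system would draw the sense inventory and context terms from DBpedia's multilingual labels and abstracts rather than a hand-written dictionary.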

Relevance:

10.00%

Publisher:

Abstract:

Today P2P faces two important challenges: the design of mechanisms to encourage users' collaboration in multimedia live streaming services, and the design of reliable algorithms with QoS provision to encourage multimedia providers to employ the P2P topology in commercial live streaming systems. We believe that these two challenges are tightly related and that there is much to be done in this respect. This paper analyzes the effect of user behavior in a multi-tree P2P overlay and describes a business model based on a monetary discount as an incentive in a P2P-Cloud multimedia streaming system. We believe a discount model can boost users' cooperation and loyalty and enhance the overall system integrity and performance. Moreover, the model bounds the constraints on a provider's revenue and cost if the P2P system is leveraged on a cloud infrastructure. Our case study shows that a streaming system provider can establish or adapt his business model by applying the described bounds to achieve a good discount-revenue trade-off and promote the system to users.
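A toy version of such a discount model shows how a discount granted to cooperating peers can pay for itself through a reduced cloud bandwidth bill. All prices, fractions, and offload rates below are illustrative assumptions, not the paper's calibrated bounds:

```python
def provider_profit(users, price, discount, coop_fraction,
                    cloud_cost_per_user, p2p_offload):
    """Toy profit model: cooperating peers get a monetary discount but relay
    part of the stream themselves, cutting the provider's cloud cost.
    All parameter values are illustrative assumptions."""
    coop = users * coop_fraction
    revenue = users * price - coop * discount
    # Cloud cost falls by the fraction of traffic the cooperating peers carry.
    cost = cloud_cost_per_user * (users - coop * p2p_offload)
    return revenue - cost

# No cooperation: the provider pays the full cloud bill.
base = provider_profit(1000, 10.0, 2.0, 0.0, 4.0, 0.7)
# 60% cooperating peers: the discount is repaid by saved bandwidth.
p2p = provider_profit(1000, 10.0, 2.0, 0.6, 4.0, 0.7)
```

With these invented numbers, 60% of peers relaying 70% of their own stream saves more in cloud cost than the discount gives away, so profit rises with cooperation; the paper's bounds characterize when that trade-off holds.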

Relevance:

10.00%

Publisher:

Abstract:

Today P2P faces two important challenges: the design of mechanisms to encourage users’ collaboration in multimedia live streaming services, and the design of reliable algorithms with QoS provision to encourage multimedia providers to employ the P2P topology in commercial streaming services. We believe that these two challenges are tightly related and that there is much to be done in this respect. This paper proposes a novel monetary incentive for P2P multimedia streaming. The incentive model classifies users into groups according to the perceived video quality. We apply the model to a streaming system’s billing model in order to evaluate its feasibility and visualize its quantitative effect on users’ motivation and the provider’s profit. We conclude that a monetary incentive can boost users’ cooperation and loyalty and enhance the overall system integrity and performance. Moreover, the model defines the constraints on the provider’s cost and profit when the system is leveraged on the cloud. Considering those constraints, a multimedia content provider can adapt the billing model of his streaming service and achieve a desirable discount-profit trade-off. This will moreover contribute to better promotion of the service across users on the Internet.
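The quality-based grouping can be sketched as a tiered billing function: users are classified by the video quality they perceive and billed accordingly. The thresholds and prices below are assumptions for illustration, not the paper's values:

```python
# Illustrative quality tiers: (minimum quality score in [0, 1], monthly price).
TIERS = [
    (0.9, 12.0),  # full quality
    (0.7, 9.0),   # degraded quality
    (0.0, 5.0),   # best-effort
]

def bill(quality: float) -> float:
    """Return the monthly price for a user's perceived video quality."""
    for threshold, price in TIERS:
        if quality >= threshold:
            return price
    return 0.0

def monthly_revenue(user_qualities):
    """Provider revenue across a population of users."""
    return sum(bill(q) for q in user_qualities)

revenue = monthly_revenue([0.95, 0.80, 0.60])
```

Users who perceive lower quality pay less, which is exactly the channel through which the model turns quality (and thus peer cooperation) into a monetary incentive.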

Relevance:

10.00%

Publisher:

Abstract:

The rise of the "Internet of Things" (IoT) and its associated technologies has enabled applications in many domains, including the monitoring of forest ecosystems, disaster and emergency management, home automation, industrial automation, smart city services, building energy efficiency, intrusion detection, and the monitoring of body signals, among many others. The drawback of an IoT network is that, once deployed, it is left unattended; that is, it is subject, among other things, to changing weather conditions and exposed to natural disasters, software or hardware failures, or malicious third-party attacks, so such networks can be considered failure-prone. The main requirement for the nodes of an IoT network is that they must be able to keep operating despite errors in the system itself. The network's ability to recover from unexpected internal and external failures is what is currently known as the "resilience" of the network. Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant, self-configuring, self-adaptive, and self-optimizing with respect to new conditions that may arise during execution. This leads to the analysis of a fundamental problem in the study of IoT networks: the "connectivity" problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network may become disconnected for several reasons, such as battery depletion, the destruction of a node, etc. 
It therefore becomes necessary to manage the resilience of the network in order to maintain connectivity between its nodes, so that each IoT node can provide continuous services to other nodes, other networks, or other services and applications. In this context, the main objective of this doctoral thesis is the study of the IoT connectivity problem, and more specifically the development of models for the analysis and management of resilience, put into practice through WSNs, in order to improve the fault tolerance of the nodes that make up the network. This challenge is addressed from two distinct angles. On the one hand, unlike other kinds of conventional device networks, nodes in an IoT network are prone to losing their connection, because they are deployed in isolated environments or in environments with extreme conditions. On the other hand, nodes are usually resources with low capabilities in terms of processing, storage, and battery, among others, which requires that the design of their resilience management be lightweight, distributed, and energy-efficient. In this vein, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of topology control, to be resilient to node failures. To this end, techniques based on fuzzy logic and on proportional-integral-derivative (PID) control are used in order to improve network connectivity, bearing in mind that energy consumption must be preserved as much as possible. Likewise, the control algorithm must be distributed because, in general, centralized approaches are not feasible for large-scale deployments. 
This thesis work involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models describing the network, a proposal for a self-adaptive control system responding to node failures, the optimization of the control system's parameters, validation through an implementation following a software engineering approach, and finally evaluation in a real application. Addressing these challenges, this work justifies, through mathematical analysis, the relation between the "node degree" (defined as the number of nodes in the neighborhood of a given node) and network connectivity, and proves the effectiveness of several types of controllers that adjust the transmission power of the network nodes in response to failures, taking energy consumption into account as part of the control objectives. Likewise, this work performs an evaluation and comparison with other representative algorithms, showing that the developed approach is more tolerant of random node failures and more energy-efficient. Additionally, the use of bio-inspired algorithms has allowed the optimization of the control parameters for large dynamic networks. Regarding the implementation in a real system, the proposals of this thesis have been integrated into an OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, especially the runtime reconfiguration of software components when a failure has occurred. In conclusion, the results of this doctoral thesis contribute to theoretical research and to the practical application of resilient topology control in large distributed networks. 
The designs and algorithms presented can be seen as a novel proof of some techniques for the coming IoT era. The main contributions of this thesis are summarized below: (1) Properties related to network connectivity have been analyzed mathematically. For example, it is studied how the probability of network connectivity varies as the communication range of the nodes is modified, as well as the minimum number of nodes that must be added to a disconnected system to reconnect it. (2) Control systems based on fuzzy logic have been proposed to reach the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated through simulations, and the results have been compared with other representative algorithms. (3) The two-loop control system has been investigated further, providing a simpler and more applicable approach, and its control parameters have been optimized using heuristic algorithms such as the Cross Entropy (CE) method, Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs presented here have been evaluated through simulation; in addition, part of the work has been implemented and validated in a real application combining self-adaptive software techniques, such as those of a Service-Oriented Architecture (SOA). ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart city, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunction issues and malicious attacks.
So, it can be considered that the IoT is prone to failure. The ability of the network to recover from unexpected internal and external failures is known as "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing IoT, which can further be broken down into "self-*" properties, such as self-adaptive, self-healing, self-configuring, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are not capable of providing continuous services for other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for analysis and management of resilience, implemented through the Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in a hostile or isolated environment. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires that the design of the resilience management for IoT has to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. The fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, meanwhile taking into consideration that energy consumption must be preserved as much as possible. 
The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis covers various aspects of network connectivity, including: the creation and analysis of mathematical models describing the network, the proposal of self-adaptive control systems that respond to node failures, control system parameter optimization, implementation using a software engineering approach, and evaluation in a real application. The thesis also establishes, through mathematical analysis, the relation between the "node degree" (the number of neighbors of a node) and network connectivity, and proves the effectiveness of various types of controllers that adjust the transmission power of IoT nodes in response to node failures. The controllers also take energy consumption into consideration as part of the control goals. An evaluation is performed and a comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy than those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows the parameters of the controller to be optimized. With respect to the implementation in a real system, the programming model known as OSGi (Open Service Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware that reconfigures software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research and practical applications of resilient topology control for large, distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT.
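The dependence of network connectivity on communication range, analyzed mathematically in the thesis, can be illustrated with a small Monte Carlo experiment on a random geometric graph; the function names and parameters below are illustrative, not taken from the thesis:

```python
import random


def connected(points, r):
    """Union-find check: nodes are linked when closer than range r."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= r * r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1


def p_connected(n, r, trials=200, seed=1):
    """Estimate the probability that n random nodes in a unit square form
    a connected network when each node's communication range is r."""
    rng = random.Random(seed)
    hits = sum(
        connected([(rng.random(), rng.random()) for _ in range(n)], r)
        for _ in range(trials)
    )
    return hits / trials


p_low = p_connected(20, 0.1)   # short range: almost always disconnected
p_high = p_connected(20, 0.6)  # long range: almost always connected
```

With 20 nodes in a unit square, the estimated connectivity probability is near 0 for a short range (r = 0.1) and near 1 for a long one (r = 0.6), matching the monotone range-connectivity dependence described above.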
The contributions of this thesis can be summarized as follows: (1) The connectivity properties of a large-scale stochastic network are analyzed mathematically. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what the minimum number of nodes to be added for network re-connection is. (2) A fuzzy-logic control system is proposed which obtains the desired node degree and in turn maintains network connectivity when the network is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulations, and the results demonstrate an improvement in fault-tolerance compared to other representative algorithms. (3) The two-loop control system, a simpler but more applicable approach, is investigated further, and its control parameters are optimized using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining self-adaptive software techniques with the control algorithms presented in this thesis.
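As an illustration of contribution (3), the following is a minimal sketch of the Cross Entropy (CE) method tuning a single controller gain; the quadratic cost is a toy stand-in for the thesis's actual control objective, and all names and parameter values are assumptions:

```python
import random
import statistics


def ce_optimize(cost, mu=0.0, sigma=2.0, samples=50, elite=10, iters=30, seed=7):
    """Cross Entropy method in one dimension: repeatedly sample candidate
    gains from a Gaussian, keep the elite (lowest-cost) fraction, and refit
    the Gaussian to those elites until the search contracts on a minimum."""
    rng = random.Random(seed)
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(samples)]
        xs.sort(key=cost)                     # best candidates first
        best = xs[:elite]                     # elite fraction
        mu = statistics.mean(best)            # refit sampling distribution
        sigma = statistics.pstdev(best) + 1e-9
    return mu


# Toy cost with its minimum at gain = 1.5
best_gain = ce_optimize(lambda k: (k - 1.5) ** 2)
```

Each iteration refits the Gaussian sampling distribution to the elite samples, so the search contracts around the low-cost region; PSO and DE follow the same sample-evaluate-update pattern with different update rules.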

Resumo:

Due to the significant increase in population and its natural desire to improve its standard of living, the usage of energy extracted from world commodities, especially in the form of electricity, has increased intensely during the last decades. This raises a challenge with no easy solution: how to guarantee that there will be enough energy to satisfy the energy demand of the world population. Among all the possible measures that can be adopted to mitigate this problem, one is almost mandatory: rationalizing energy utilization, so that waste is minimized and the available energy can be leveraged over a longer period of time. One way to achieve this is by improving the power distribution grid so that it can react more efficiently to common issues, such as energy demand peaks or inaccurate electricity consumption forecasts. However, implementing this improvement requires technologies from the ICT (Information and Communication Technologies) sphere that often present challenges in some key areas: advanced metering infrastructure integration, interoperability and interconnectivity of the devices, interfaces offered to applications, the design of security measures, etc. All these challenges may slow down the adoption of the smart grid as a system to prolong the lifespan and utilization of the available energy. This Master's Thesis puts forward a proposal for an intermediation architecture that makes it possible to solve these challenges. An implementation is also included, together with the tests carried out to assess the performance of the presented concepts, showing that the challenges posed by the smart grid can be resolved.

Resumo:

Pavilions of the Universal Exhibitions are often considered ephemeral architecture. However, it is worth noting that every construction has its time and its extinction period, and both can be indefinite: the permanent within the ephemeral. Many of the iconic works of the twentieth century existed only for a few months, in ephemeral settings, changing the course of architecture with just a few images.
This leads to the question of whether their survival under unusual circumstances, or their extinction, is due not so much to their ephemeral condition as to their experimental nature. Pavilions are at the basis of a significant part of the history of contemporary architecture. Specific Universal Exhibitions served as platforms for these landmarks to become myths, be it because of their distance in time, because they no longer exist, or because in some cases only a limited and outdated imagery of them remains. The different Histories of Architecture highlight the importance of some pavilions and the role that some of those built for particular Universal Exhibitions have played, play and will play. They are a living testimony, lasting over time, each playing a specific role: as a basis for new technological or constructive breakthroughs; as a way to experiment with new ways of living; to educate; or to raise the profile of their hitherto little-known authors. Thanks to their experimental or innovative approach, the pavilions that have endured over time, those that have been moved and rebuilt in a new location, and even those that followed their fate and became absent architectures are all still alive in today's architecture. This thesis analyses the set of factors that contributed to conferring landmark status on pavilions: what kind of publications speak of them; how they are referred to and the extent to which they are linked to the production of their time and/or of their author; which aspects stand out; which iconic values have been established as time goes by and which are still valid... What is it that remains. It also assesses to what extent it is their condition as ephemeral constructions, given their inherent need to physically disappear and thus be absent from memory, that has endowed them with representativeness.
Today this may seem somewhat contradictory, given the high value placed on images in contemporary society, to the point that they have become an essential component of representativeness: the image replaces remembrance, so that whatever lacks physical manifestation seems not to exist at all, eventually losing all capacity for representation. However, considering the image as an essential element of the iconic, in most cases the reconstruction of pavilions after the close of the exhibitions has only enhanced their value as ephemeral architectures, since, stripped of their temporary character, exhibition pavilions lose their reason for being. The Pavilion of Spain by Corrales and Molezún for the Brussels'58 EXPO is a clear example of this, as shown in the development of this thesis. This thesis examines the cases of the selected pavilions, tracing, mainly in periodical publications, the role played in each of them by their final destination, which, although not the objective of this thesis, may in some cases have contributed to granting them landmark status in the history of architecture. Ultimately, it is about tracing the events that have led them to their condition as architectural references, as landmarks in the History of Architecture. The study focuses on pavilions of the Universal Exhibitions of Brussels'58, Montreal'67 and Osaka'70 for two main reasons: first, their classification by the Bureau International des Expositions (BIE) as Universal Exhibitions of the first category; and second, the period in which they were held, from 1945 to 1970, a time of profound and decisive changes in architecture, during which the development and subsequent revision of modernity after World War II took place.
It analyzes the bibliographic trajectory of the most cited pavilions of the three Universal Exhibitions: in Brussels'58, the Pavilion of the Federal Republic of Germany by Egon Eiermann and Sep Ruf, the Philips Pavilion by Le Corbusier, and the Pavilion of Spain by José Antonio Corrales and Ramón Molezún; in Montreal'67, the Pavilion of the Federal Republic of Germany by Frei Otto and the United States Pavilion by Richard Buckminster Fuller; and in Osaka'70, the Theme Pavilion by Kenzo Tange, the Takara Beautilion by Kisho Kurokawa, and the Fuji Group Pavilion by Yutaka Murata. The analysis shows that, already in the magazines contemporary with the exhibitions, these pavilions were singled out as buildings relevant to the future history of architecture. This is confirmed by their appearance in the histories, even the most recent ones, which demonstrates their now consolidated status as landmarks in the History of Architecture.

Resumo:

In Brazil, a child with Down Syndrome (DS) is born in every 600 births, which represents approximately 8,000 babies with DS per year. The peculiarities of these children's development require parents to develop special skills to attend to each particular need of the child, needs that might go unnoticed or that would be easily perceived in children without any type of syndrome. Interaction with the parents, the primary agents in this process, is essential, not least to minimize the effects of the Syndrome; however, little has been studied about the caregivers' experience of the encounter with the child. In this context, the objective was to understand how "being the father/mother" of a child with Down Syndrome was constructed, from the diagnosis of Down Syndrome up to the moment of the interview. To this end, the clinical-qualitative method was used, through a collective case study, with Winnicottian psychoanalysis as the theoretical framework for the analysis. Semi-open, individual, face-to-face interviews were conducted with 5 couples, parents of children with Down Syndrome aged 7 to 10. The interviews were audio-recorded and transcribed in full. The results are presented in four categories: "Love at second sight" addresses the initial interactive process; the parents report the shock of receiving the news of the Syndrome and the challenges of readjusting their dreams and expectations. "The environment, place and non-place" describes how the parents perceived the various environments, some of them hostile, which did not help them feel welcomed and strengthened in the task of caring for this child, and highlights that, in the parents' perception, the absence of support results in overload. On the other hand, they consider that the greatest support they had came from their partner, which helped them accept the news and find possibilities of care.
\"Encontro Suficientemente Bom\" coloca em relevo a descrição dos participantes de que há maneiras diferentes de ajustar o cuidado na interação com seus filhos que perpassaram tanto por incômodos, quanto pela possibilidade do gesto criativo que se apresenta em atividades triviais e importantes do desenvolvimento. \"Trans-formações\" destaca às mudanças que os pais vivenciam ao poder se aproximar do filho \"real\", assumindo novos papéis, transformando-se através da abertura ao novo do outro e de si mesmos. A partir desse estudo pôde-se compreender que a relação vai se constituindo e se regulando reciprocamente, os cuidados precisam ser ajustados à demanda e possibilidade do outro. Compreendeu-se, ainda, que criatividade é a característica que permite que os pais sejam espontâneoss e recontruam significados e modos de interagir pessoais com seus filhos. Os pais entrevistados indicam que quanto mais lento e exigente o cuidado com seus filhos com Síndrome de Down, mais possibilidades de encontros surgem, e quando esses podem ser suficientemente bons, são \"trans-formadores\" para ambos: pais e filho. Ampliou-se a compreensão quanto a necessidade de acolhimento às angústias vividas, e suporte para o processo da construção da parentalidade.