868 results for design quality
Abstract:
In recent years an ever increasing degree of automation has been observed in most industrial processes. This increase is driven by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend toward more complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology typical of mechatronics is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations together with one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy boxed products such as food or cigarettes, and so on. A further indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large share of the manufacturing machine industry, and notably the packaging machine industry, is located in Italy; a particularly high concentration of such companies is found in the Bologna area, which is for this reason called the “packaging valley”. Usually the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating them to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activities inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers model and structure their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed empirically, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very “unstructured” way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be migrated directly to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly tied to the adopted implementation technology (relays in the past, software nowadays), again blurring the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been embracing this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been adopted in commercial products only recently. In the scientific and technical literature, moreover, many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years there has been considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also take on other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, along with higher performance, complex systems bring an increase in fault occurrences.
This is a consequence of the fact that, as typically occurs in mechatronic systems, complex systems such as AMS contain, alongside reliable mechanical elements, an increasing number of electronic devices, which are by their nature more vulnerable. The problem of diagnosis and fault isolation in a generic dynamical system consists in designing a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that in its final implementation it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is to prevent faults and, where necessary, reconfigure the control system so that faults are tolerated. On this topic, important results for the formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture that help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator and shows its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. Chapter 5 presents a new approach, based on Discrete Event Systems, to the problem of formal software verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results on Discrete Event Systems that should help the reader understand some crucial points of Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
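The chapters themselves develop these ideas with dedicated modelling tools; purely as an illustration of the Discrete Event Systems view invoked above, the following Python sketch models a component as a finite automaton with a fault event and asks a minimal verification question by reachability. All state and event names are invented for the example.

```python
from collections import deque

# A component modelled as a finite automaton (Discrete Event Systems view):
# a transition map keyed by (state, event). Names are purely illustrative.
transitions = {
    ("idle",    "start"):  "working",
    ("working", "done"):   "idle",
    ("working", "fault"):  "broken",   # fault event, typically unobservable
    ("broken",  "repair"): "idle",
}

def reachable_states(initial, transitions):
    """Breadth-first exploration of all states reachable from the initial one."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# A minimal "formal verification" question: can the forbidden state be reached?
forbidden = "broken"
print(forbidden in reachable_states("idle", transitions))  # True: a fault can occur
```

A diagnoser or supervisor built on such automaton models would of course use far richer machinery than this toy reachability check.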
Abstract:
Skype is one of the well-known applications that have guided the evolution of real-time video streaming, and it has become one of the most widely used pieces of software in everyday life. It provides VoIP audio/video calls as well as messaging and file transfer. Many versions are available, covering all the principal operating systems such as Windows, Macintosh and Linux, but also mobile systems. Voice quality has driven Skype's success since its birth in 2003, and its peer-to-peer architecture has allowed worldwide diffusion. After the introduction of video calls in 2006, Skype became a complete solution for communication between two or more people. As a primarily video conferencing application, Skype assumes certain characteristics of the delivered video in order to optimize its perceived quality. However, in recent years, and with the release of SkypeKit, many new Skype video-enabled devices have come out, especially in the mobile world. This has forced a change to the traditional recording, streaming and receiving settings, allowing for a wide range of network and content dynamics. Video calls are no longer limited to static ‘chatting’: mobile devices have opened new possibilities and can be used in several scenarios. For instance, lecture streaming or one-to-one mobile video conferences exhibit more dynamics, as both caller and callee might be on the move. Most of these cases differ from “head & shoulders”-only content. Therefore, Skype needs to optimize its video streaming engine to cover more video types. Heterogeneous connections require different behaviors and solutions, and Skype must cope with this variety to maintain a certain quality independently of the connection used. Part of the present work focuses on analyzing Skype's behavior depending on the video content. Since the Skype protocol is proprietary, most studies so far have tried to characterize its traffic and to reverse engineer its protocol. However, questions related to the behavior of Skype, especially regarding the quality perceived by users, remain unanswered. We study Skype's video codec capabilities and video quality assessment. Another motivation of our work is the design of a mechanism that estimates the impact of network conditions on the perceived quality of Skype video delivery. To this extent, we try to assess in an objective way the impact of network impairments on the perceived quality of a Skype video call. Traditional video streaming schemes lack the flexibility and adaptivity that Skype tries to achieve at the edge of the network. Our contribution lies in a testbed and the consequent objective video quality analysis that we carry out on input videos: we stream raw video files with Skype over an impaired channel, record them at the receiver side, and analyze them with objective quality-of-experience metrics.
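As a hint of what the objective comparison between the streamed source and the video recorded at the receiver can look like, here is a minimal frame-level PSNR sketch in Python with NumPy. It is only an illustrative full-reference metric, not Skype's internal machinery, and the frame size and 8-bit pixel range are assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, received: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames of identical shape (8-bit assumed)."""
    mse = np.mean((reference.astype(np.float64) - received.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative use: compare a reference frame with a noisy copy of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
rec = np.clip(ref.astype(int) + rng.integers(-5, 6, size=ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, rec):.1f} dB")
```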
Abstract:
The field of "computer security" is often considered something between art and science. This is partly due to the lack of widely agreed and standardized methodologies for evaluating the degree of security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that can be applied to different threat scenarios with the same degree of effectiveness. Security testing methodologies are the first step towards standardized security evaluation processes and an understanding of how security threats evolve over time. This dissertation analyzes some of the most widely used methodologies, identifying differences and commonalities that are useful to compare them and assess their quality. The dissertation then proposes a new, enhanced methodology built by keeping the best of every analyzed methodology. The designed methodology is tested on different systems with very effective results, which is the main evidence that it could really be applied in practical cases. Much of the dissertation discusses and shows how the presented testing methodology can be applied to such different systems, and even used to evade security measures by inverting goals and scopes. Real cases are often hard to find in methodology documents; by contrast, this dissertation presents real and practical cases, offering technical details about how to apply the methodology. Electronic voting systems are the first field test considered, with Pvote and Scantegrity as the two tested electronic voting systems. The usability and effectiveness of the designed methodology for electronic voting systems are demonstrated through these field case analyses. Furthermore, reputation and antivirus engines have also been analyzed, with similar results. The dissertation concludes by presenting some general guidelines for building a coordination-based approach to electronic voting systems that improves security without decreasing system modularity.
Abstract:
Heavy pig breeding in Italy is mainly oriented toward the production of high quality processed products. Of particular importance is dry cured ham production, which is strictly regulated and requires specific carcass characteristics correlated with green leg characteristics. Furthermore, as pigs are slaughtered at about 160 kg live weight, the Italian pig breeding sector faces severe problems of production efficiency related to all the biological aspects linked to growth, feed conversion, fat deposition and so on. It is well known that production and carcass traits are in part genetically determined. Therefore, as a first step to understanding the genetic basis of traits that could have a direct or indirect impact on dry cured ham production, a candidate gene approach can be used to identify DNA markers associated with parameters of economic importance. In this thesis, we investigated three candidate genes for carcass and production traits (TRIB3, PCSK1, MUC4) in pig breeds used for dry cured ham production, using different experimental approaches in order to find molecular markers associated with these parameters.
Abstract:
The continuous advancements and enhancements of wireless systems are enabling new compelling scenarios where mobile services can adapt to the current execution context, represented by the computational resources available at the local device, the current physical location, people in physical proximity, and so forth. Such services, called context-aware services, require the timely delivery of all relevant information describing the current context, and that introduces several unsolved complexities, spanning from low-level context data transmission up to context data storage and replication in the mobile system. In addition, to ensure correct and scalable context provisioning, it is crucial to integrate and interoperate with different wireless technologies (WiFi, Bluetooth, etc.) and modes (infrastructure-based and ad-hoc), and to use decentralized solutions to store and replicate context data on mobile devices. These challenges call for novel middleware solutions, here called Context Data Distribution Infrastructures (CDDIs), capable of delivering relevant context data to mobile devices while hiding all the issues introduced by data distribution in heterogeneous and large-scale mobile settings. This dissertation thoroughly analyzes CDDIs for mobile systems, with the main goal of achieving a holistic approach to the design of this type of middleware. We discuss the main functions needed by context data distribution in large mobile systems, and we advocate the precise definition and strict enforcement of quality-based contracts between context consumers and the CDDI as the means to reconfigure the main middleware components at runtime. We present the design and implementation of our proposals, both in simulation-based and in real-world scenarios, along with an extensive evaluation that confirms the technical soundness of the proposed CDDI solutions. Finally, we consider three highly heterogeneous scenarios, namely disaster areas, smart campuses, and smart cities, to highlight the wide technical validity of our analysis and solutions under different network deployments and quality constraints.
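Only to make the idea of a quality-based contract concrete, the sketch below shows a minimal contract between a context consumer and the CDDI, together with a violation check that could trigger middleware reconfiguration; all field names and thresholds are invented and are not the dissertation's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class ContextContract:
    """Quality-based contract between a context consumer and the CDDI (illustrative fields)."""
    max_staleness_s: float      # oldest acceptable context data, in seconds
    min_delivery_ratio: float   # fraction of relevant updates that must reach the consumer
    max_latency_ms: float       # end-to-end delivery latency bound

def contract_violated(contract: ContextContract, staleness_s: float,
                      delivery_ratio: float, latency_ms: float) -> bool:
    """True if measured delivery quality breaks the contract (a cue to reconfigure the CDDI)."""
    return (staleness_s > contract.max_staleness_s
            or delivery_ratio < contract.min_delivery_ratio
            or latency_ms > contract.max_latency_ms)

# Example: a consumer tolerating 30 s staleness, 90% delivery, 500 ms latency.
c = ContextContract(max_staleness_s=30.0, min_delivery_ratio=0.9, max_latency_ms=500.0)
print(contract_violated(c, staleness_s=12.0, delivery_ratio=0.95, latency_ms=220.0))  # False
```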
Abstract:
In past centuries, before the invention of the automobile, roads consisted mainly of unpaved paths connecting only a few cities. Later, at the beginning of the twentieth century, the automobile was introduced and a new type of transportation system was born. It therefore became necessary to change the condition of roads to suit automobiles. With the spread and development of automobiles, roads also developed and increased all over the world, causing negative effects on the environment and on human quality of life. Thus, highway agencies and communities had to take steps to reduce these effects and to address environmental and cultural issues together with the traditional commitment to safety and mobility; this is known as context sensitive design. The aim of this thesis is to use the concepts of context sensitive design to reduce the negative environmental impacts of the provincial road Galliera, which connects Via Colombo in the city of Bologna to provincial road 3 in Argelato. Some solutions are proposed in this thesis to reduce traffic noise, fragmentation and fauna mortality, and to improve the aesthetics of the road.
Abstract:
The aim of this work is to investigate the influence of blister design and foil quality on the functionality of blister packs. To this end, analytical methods based on interferometry, IR spectroscopy, beta backscattering, eddy-current measurement and impedance spectroscopy are developed that are suitable for the quantitative determination of heat-seal lacquers and laminate coatings on aluminium blister foils. A comparison of the methods shows that beta backscattering, interferometry and IR measurements are suitable for determining heat-seal lacquers, while interferometry and the eddy-current method are suitable for determining plastic laminates. In the second part of the work, the influence of the heat-seal lacquer coating weight of lidding foils on the quality of blister packs is investigated. As the coating weight increases, the seal-seam strength increases, but so does the water-vapour permeability of the blisters. The heat-seal lacquers investigated show permeation coefficients comparable to polyvinyl chloride. In studies of sealing-process validity, the heat-seal lacquer coating weight shows only minor effects. In the third part of the work, the influence of blister design on the user-friendliness of blister packs is investigated in a handling study. Variations in the opening forces of push-through blisters have a clear effect on how the blisters are rated by the participants. While most participants were able to open all tested push-through blisters within the test duration of 4 minutes (>84%), considerably more handling problems occurred with the peel blister and the peel-off-push-through blister. The handling problems correlate with the participants' age, living situation, state of health and visual ability.
Abstract:
Atmospheric aerosol particles affect humans and the environment in many ways. A precise characterization of the particles helps us to understand their effects and to assess the consequences. Particles can be characterized in terms of their size, their shape and their chemical composition. Laser ablation mass spectrometry makes it possible to determine the size and chemical composition of individual aerosol particles. In this work, the SPLAT (Single Particle Laser Ablation Time-of-flight mass spectrometer) was further developed for improved analysis of atmospheric aerosol particles in particular. The aerosol inlet was optimized to transfer the widest possible particle size range (80 nm - 3 µm) into the SPLAT and to focus the particles into a narrow beam. A new description of the relationship between particle size and particle velocity in vacuum was found. The alignment of the inlet was automated using stepper motors. The optical detection of the particles was improved so that particles smaller than 100 nm can be detected. Building on the optical detection and the automatic tilting of the inlet, a new method for characterizing the particle beam was developed. The control electronics of the SPLAT were improved so that the maximum analysis frequency is limited only by the ablation laser, which can ablate at most at about 10 Hz. By optimizing the vacuum system, the ion loss in the mass spectrometer was reduced by a factor of 4.

In addition to these hardware developments of the SPLAT, a large part of this work consisted of designing and implementing a software solution for analyzing the raw data acquired with the SPLAT. CRISP (Concise Retrieval of Information from Single Particles) is a software package built on IGOR PRO (Wavemetrics, USA) that allows efficient evaluation of single-particle raw data. CRISP contains a newly developed algorithm for the automatic mass calibration of each individual mass spectrum, including the suppression of noise and of problems with signals exhibiting intense tailing. CRISP provides methods for the automatic classification of the particles. Implemented are k-means, fuzzy-c-means and a form of hierarchical clustering based on a minimum spanning tree. CRISP offers the possibility of pre-processing the data so that the automatic classification of the particles runs faster and the results are of higher quality. In addition, CRISP can easily sort particles according to predefined criteria. The data handling and infrastructure underlying CRISP were designed with maintenance and extensibility in mind.

During this work, the SPLAT was successfully deployed in several campaigns, and the capabilities of CRISP were demonstrated on the acquired data sets.

The SPLAT is now able to operate efficiently in the field for the characterization of atmospheric aerosol, while CRISP enables fast and targeted evaluation of the data.
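CRISP itself is implemented in IGOR PRO; purely as a language-neutral illustration of the k-means classification it provides, the following Python sketch clusters synthetic stand-ins for single-particle mass spectra. The spectrum length, particle count and cluster number are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for single-particle mass spectra: 300 particles, 200 m/z channels,
# drawn from three artificial composition classes. Real spectra come from the instrument.
rng = np.random.default_rng(1)
centers = rng.random((3, 200))
labels_true = rng.integers(0, 3, size=300)
spectra = centers[labels_true] + 0.05 * rng.standard_normal((300, 200))

# Normalise each spectrum to unit total intensity, then cluster with k-means.
spectra /= spectra.sum(axis=1, keepdims=True)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(spectra)

print(np.bincount(km.labels_))   # number of particles assigned to each cluster
```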
Abstract:
When designing metaheuristic optimization methods, there is a trade-off between application range and effectiveness. For large real-world instances of combinatorial optimization problems, out-of-the-box metaheuristics often fail, and optimization methods need to be adapted to the problem at hand. Knowledge about the structure of high-quality solutions can be exploited by introducing a so-called bias into one of the components of the metaheuristic used. These problem-specific adaptations allow search performance to be increased. This thesis analyzes the characteristics of high-quality solutions for three constrained spanning tree problems: the optimal communication spanning tree problem, the quadratic minimum spanning tree problem and the bounded diameter minimum spanning tree problem. Several relevant tree properties that should be examined when analyzing a constrained spanning tree problem are identified. Based on the insights gained into the structure of high-quality solutions, efficient and robust solution approaches are designed for each of the three problems. Experimental studies analyze the performance of the developed approaches compared to the current state of the art.
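The thesis designs dedicated approaches for each of the three problems; the sketch below only illustrates the general notion of a bias in a metaheuristic component, here a randomized Kruskal-style tree construction whose edge choices are skewed toward low-weight edges. The bias exponent is an invented knob, not one of the thesis's methods.

```python
import random

def biased_spanning_tree(n_nodes, edges, bias=2.0, seed=0):
    """Randomized Kruskal-style construction: at each step an edge is drawn with
    probability proportional to (1/weight)**bias, so low-weight edges are preferred.
    edges is a list of (u, v, weight) tuples on nodes 0..n_nodes-1 with positive weights."""
    rng = random.Random(seed)
    parent = list(range(n_nodes))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, candidates = [], list(edges)
    while len(tree) < n_nodes - 1 and candidates:
        weights = [(1.0 / w) ** bias for (_, _, w) in candidates]
        u, v, w = rng.choices(candidates, weights=weights, k=1)[0]
        candidates.remove((u, v, w))
        ru, rv = find(u), find(v)
        if ru != rv:                   # keep the edge only if it does not close a cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Small example on a 4-node graph.
example_edges = [(0, 1, 1.0), (1, 2, 4.0), (2, 3, 1.0), (0, 3, 3.0), (0, 2, 2.0)]
print(biased_spanning_tree(4, example_edges))
```

Raising the bias exponent concentrates the construction on the cheapest edges; setting it to zero recovers an unbiased random spanning tree.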
Abstract:
Resource management is of paramount importance in network scenarios and is a long-standing and still open issue. Unfortunately, while technology and innovation continue to evolve, our network infrastructure has been maintained in almost the same shape for decades, a phenomenon known as “Internet ossification”. Software-Defined Networking (SDN) is an emerging paradigm in computer networking that allows a logically centralized software program to control the behavior of an entire network. This is done by decoupling the network control logic from the underlying physical routers and switches that forward traffic to the selected destination. One mechanism that allows the control plane to communicate with the data plane is OpenFlow. Network operators can write high-level control programs that specify the behavior of an entire network. Moreover, centralized control makes it possible to define more specific and complex tasks that involve many network functionalities, e.g., security, resource management and control, within a single framework. Nowadays, the explosive growth of real-time applications that require stringent Quality of Service (QoS) guarantees leads network programmers to design network protocols that deliver certain performance guarantees. This thesis exploits the use of SDN, in conjunction with OpenFlow, to manage differentiated network services with high QoS. Initially, we define a QoS Management and Orchestration architecture that allows us to manage the network in a modular way. Then, we provide a seamless integration between the architecture and the standard SDN paradigm, following the separation between the control and data planes. This work is a first step towards the deployment of our proposal in the University of California, Los Angeles (UCLA) campus network, with differentiated services and stringent QoS requirements. We also plan to exploit our solution to manage handoff between different network technologies, e.g., Wi-Fi and WiMAX. Indeed, the model can be run with different parameters, depending on the communication protocol, and can provide optimal results to be implemented on the campus network.
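To make the flow-level view concrete without tying it to any particular controller, the sketch below represents an OpenFlow-style match/action rule as plain Python data and maps invented traffic classes to per-port queues; it is not the architecture deployed at UCLA, just an illustration of queue-based service differentiation.

```python
# Invented traffic classes mapped to switch queues with a guaranteed minimum rate (Mbit/s).
QOS_QUEUES = {
    "voip":        {"queue_id": 0, "min_rate_mbps": 10},
    "video":       {"queue_id": 1, "min_rate_mbps": 50},
    "best_effort": {"queue_id": 2, "min_rate_mbps": 0},
}

def build_flow_rule(dst_ip: str, udp_dst_port: int, traffic_class: str, out_port: int) -> dict:
    """Build an OpenFlow-style match/action rule (plain data, not tied to any controller API):
    matching packets are sent out of the given port through the queue of their traffic class."""
    queue = QOS_QUEUES[traffic_class]["queue_id"]
    return {
        "priority": 100,
        "match":   {"eth_type": 0x0800, "ip_proto": 17,          # IPv4 over UDP
                    "ipv4_dst": dst_ip, "udp_dst": udp_dst_port},
        "actions": [{"type": "SET_QUEUE", "queue_id": queue},
                    {"type": "OUTPUT", "port": out_port}],
    }

# Example: steer a video stream to the high-priority queue on switch port 3.
print(build_flow_rule("10.0.0.42", 5004, "video", out_port=3))
```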
Abstract:
OBJECTIVES To compare longitudinal patterns of health care utilization and quality of care for other health conditions between breast cancer-surviving older women and a matched cohort without breast cancer. DESIGN Prospective five-year longitudinal comparison of cases and matched controls. SUBJECTS Newly identified breast cancer patients recruited during 1997–1999 from four geographic regions (Los Angeles, CA; Minnesota; North Carolina; and Rhode Island; N = 422) were matched by age, race, baseline comorbidity and zip code location with up to four non-breast-cancer controls (N = 1,656). OUTCOMES Survival; numbers of hospitalized days and physician visits; total inpatient and outpatient Medicare payments; guideline monitoring for patients with cardiovascular disease and diabetes, and bone density testing and colorectal cancer screening. RESULTS Five-year survival was similar for cases and controls (80% and 82%, respectively; p = 0.18). In the first follow-up year, comorbidity burden and health care utilization were higher for cases (p < 0.01), with most differences diminishing over time. However, the number of physician visits was higher for cases (p < 0.01) in every year, driven partly by more cancer and surgical specialist visits. Cases and controls adhered similarly to recommended bone density testing, and monitoring of cardiovascular disease and diabetes; adherence to recommended colorectal cancer screening was better among cases. CONCLUSION Breast cancer survivors’ health care utilization and disease burden return to pre-diagnosis levels after one year, yet their greater use of outpatient care persists at least five years. Quality of care for other chronic health problems is similar for cases and controls.
Abstract:
Background: After oral tumor resection, structural and functional rehabilitation by means of dental prostheses is complex, and a positive treatment outcome is not always predictable. Purpose: The objective of the study was to report on oral rehabilitation and quality of life 2-5 years after resection of malignant oral tumors. Materials and Methods: Data from 46 patients (57 ± 7 years) who underwent oral tumor surgery were available. More than 50% of the tumors were classified T3 or T4. Open oro-nasal defects resulted in 12 patients and full mandibular block resections in 23 patients. Comprehensive planning, implant placement, and prosthetic rehabilitation followed an interdisciplinary protocol. Analysis comprised tumor location, type of prostheses, implant survival, and quality of life. Results: Because of advanced tumor status, resections resulted in marked alteration of the oral anatomy requiring complex treatment procedures. Prosthetic rehabilitation comprised fixed and removable prostheses, with 104 implants placed in 28 patients (60%). Early implant loss was high (13%), and the cumulative survival rate of loaded implants was <90% after 5 years. Prosthetic plans had to be modified because of side effects of tumor therapy, complications with implants, and tumor recurrence. The majority of patients rated quality of life favorably, but some experienced impaired swallowing, dry mouth, limited mouth opening, concerns about their appearance, and soreness. Conclusions: Some local effects of tumor therapy could not be significantly improved by prosthetic rehabilitation, leading to functional and emotional disability. Many patients had passed away or felt too ill to fill in the questionnaires. This case series confirms the complex anatomic alterations after tumor resection and the need for individual treatment approaches, especially regarding prosthesis design. In spite of disease-related local and general restrictions, most patients gave a positive assessment of their quality of life.
Abstract:
The objective of this article was to record reporting characteristics related to study quality in research published in major specialty dental journals with the highest impact factors (Journal of Endodontics, Journal of Oral and Maxillofacial Surgery, American Journal of Orthodontics and Dentofacial Orthopedics, Pediatric Dentistry, Journal of Clinical Periodontology, and International Journal of Prosthetic Dentistry). The included articles were classified into the following 3 broad subject categories: (1) cross-sectional (snapshot), (2) observational, and (3) interventional. Multinomial logistic regression was conducted for effect estimation, using the journal as the response and randomization, sample size calculation, discussion of confounding, multivariate analysis, effect measurement, and confidence intervals as the explanatory variables. The results showed that cross-sectional studies were the dominant design (55%), whereas observational investigations accounted for 13% and interventions/clinical trials for 32%. Reporting of quality characteristics was low for all variables: random allocation (15%), sample size calculation (7%), confounding issues/possible confounders (38%), effect measurements (16%), and multivariate analysis (21%). Eighty-four percent of the published articles reported a statistically significant main finding, and only 13% presented confidence intervals. The Journal of Clinical Periodontology showed the highest probability of including quality characteristics in reporting results among all the dental journals.
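As a generic illustration of the multinomial logistic regression described above (journal as the response, reporting-quality items as predictors), here is a scikit-learn sketch on made-up data; the binary indicator coding, sample size and journal count are assumptions, not the article's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up data: six binary reporting-quality indicators (randomization reported,
# sample size calculation, confounding discussed, multivariate analysis,
# effect measurement, confidence intervals) for 300 articles from 6 journals.
rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(300, 6))           # explanatory variables
y = rng.integers(0, 6, size=300)                # response: journal index 0..5

# With the default lbfgs solver, scikit-learn fits the multinomial formulation
# for a multiclass response, giving one coefficient vector per journal.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_.shape)    # (6 journals, 6 quality indicators)
```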
Abstract:
A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
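Simply to make the feasibility condition concrete, the following sketch uses NetworkX local edge connectivity to check whether a candidate edge set meets pairwise edge-disjoint path requirements on a small invented instance; the paper's hierarchical tabu search approach is, of course, far more elaborate than this check.

```python
import networkx as nx

def satisfies_requirements(edges, requirements):
    """Check whether the subgraph formed by `edges` provides, for every node pair (u, v),
    at least requirements[(u, v)] edge-disjoint paths (local edge connectivity)."""
    G = nx.Graph()
    G.add_edges_from(edges)
    for (u, v), r in requirements.items():
        if u not in G or v not in G or nx.edge_connectivity(G, u, v) < r:
            return False
    return True

# Invented instance: a 4-node ring plus one chord, requiring 2 edge-disjoint paths
# between nodes 0 and 2 and a single path between nodes 1 and 3.
candidate = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
reqs = {(0, 2): 2, (1, 3): 1}
print(satisfies_requirements(candidate, reqs))   # True: the ring alone gives two disjoint 0-2 paths
```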
Abstract:
Background Modern methods in intensive care medicine often enable the survival of older critically ill patients. The short-term outcomes for patients treated in intensive care units (ICUs), such as survival to hospital discharge, are well documented. However, relatively little is known about subsequent long-term outcomes. Pain, anxiety and agitation are important stress factors for many critically ill patients. There are very few studies concerned with pain, anxiety and agitation and their consequences in older critically ill patients. The overall aim of this study is to identify how an ICU stay influences an older person's experiences later in life. More specifically, this study has the following objectives: (1) to explore the relationship between pain, anxiety and agitation during ICU stays and experiences of the same symptoms in later life; and (2) to explore the associations between pain, anxiety and agitation experienced during ICU stays and their effect on subsequent health-related quality of life, use of the health care system (readmissions, doctor visits, rehabilitation, medication use), living situation, and survival after discharge and at 6 and 12 months of follow-up. Methods/Design A prospective, longitudinal design will be used. A total of 150 older critically ill patients in the ICU will participate (ICU group). Pain, anxiety, agitation, morbidity, mortality, use of the health care system, and health-related quality of life will be measured at 3 intervals after a baseline assessment. Baseline measurements will be taken 48 hours after ICU admission and one week thereafter. Follow-up measurements will take place 6 months and 12 months after discharge from the ICU. To be able to interpret trends in scores on outcome variables in the ICU group, a comparison group of 150 participants, matched by age and gender and recruited from the Swiss population, will be interviewed at the same intervals as the ICU group. Discussion Little research has focused on the long-term consequences of ICU admission in older critically ill patients. The present study focuses specifically on the long-term consequences of stress factors experienced during ICU admission.