953 results for General-purpose computing
Abstract:
Show caves provide tourists with the opportunity to have close contact with natural underground spaces. However, visitation to these places also creates a need for management measures, chiefly the definition of a tourist carrying capacity. The present work describes the results of climate monitoring and atmospheric profiling performed in Santana Cave (Alto Ribeira State and Tourist Park - PETAR, Brazil) between 2008 and 2011. Based on the results, distinct preliminary zones with different levels of thermal variation were identified, which classify Santana Cave as a warm trap. Two critical points along the tourist route (Cristo and Encontro Halls) were identified where the local temperature increased by 1.3 °C when tourists were present. Air flows from the inner cave to the outside during the austral summer, and the opposite flow occurs during the austral winter, when the outside environment is colder than the air inside the cave. Temperature was used to establish thresholds for the tourist carrying capacity by computing the recovery time of atmospheric conditions after the changes caused by the presence of tourists. This method suggests a maximum limit of approximately 350 visits per day to Santana Cave. The study concludes that Santana Cave has an atmosphere that is highly connected with the outside; daily variations in temperature and, to a lesser extent, in relative humidity occur throughout the entire studied area of the cave. Therefore, the tourist carrying capacity in Santana Cave can be flexible and can be implemented based on climate seasonality, tourism demand and other management strategies.
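The carrying-capacity method described above (measure how long the cave atmosphere needs to return to baseline after a tour, then derive a visit cadence) can be illustrated with a short computation. The following is a minimal sketch only: the variable names, the 0.1 °C recovery tolerance and the scheduling arithmetic are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def recovery_minutes(t, temp, tour_start, tour_end, tol=0.1):
    """Minutes after a tour until the cave temperature returns to within
    `tol` degrees C of the pre-tour baseline (nan if it never recovers).
    t: sample times in minutes; temp: temperatures in degrees C."""
    baseline = temp[t < tour_start].mean()        # undisturbed reference level
    recovered = (t >= tour_end) & (np.abs(temp - baseline) <= tol)
    return float(t[recovered][0] - tour_end) if recovered.any() else float("nan")

# A daily limit then follows from the cadence the recovery time allows, e.g.:
# visits_per_day = group_size * operating_minutes // (tour_minutes + recovery)
```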
Abstract:
We propose four algorithms for computing the inverse optical flow between two images. We assume that the forward optical flow has already been obtained and we need to estimate the flow in the backward direction. The forward and backward flows can be related through a warping formula, which allows us to propose very efficient algorithms. These are presented in increasing order of complexity. The proposed methods provide high accuracy with low memory requirements and low running times. In general, the processing reduces to one or two image passes. Typically, when objects move in a sequence, some regions may appear or disappear. Finding the inverse flows in these situations is difficult and, in some cases, it is not possible to obtain a correct solution. Our algorithms deal with occlusions very easily and reliably. Disocclusions, on the other hand, have to be handled in a post-processing step. We propose three approaches for filling disocclusions. In the experimental results, we use standard synthetic sequences to study the performance of the proposed methods and show that they yield very accurate solutions. We also analyze the performance of the filling strategies.
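The warping relation the authors build on can be made concrete with a one-pass sketch: each pixel of frame 1 "writes" its negated displacement at the position where it lands in frame 2. This is only an illustration of that relation using nearest-neighbour splatting, not the authors' four algorithms; the collision and filling policies here are deliberately naive assumptions.

```python
import numpy as np

def invert_flow(fwd):
    """fwd: (H, W, 2) forward flow, fwd[y, x] = (dx, dy) from frame 1 to 2.
    Returns a backward-flow estimate; NaN marks disoccluded pixels."""
    H, W, _ = fwd.shape
    bwd = np.full((H, W, 2), np.nan)
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.rint(xs + fwd[..., 0]).astype(int)    # landing column in frame 2
    yt = np.rint(ys + fwd[..., 1]).astype(int)    # landing row in frame 2
    inside = (xt >= 0) & (xt < W) & (yt >= 0) & (yt < H)
    # Occlusions: several sources may land on the same target pixel; with
    # this flat assignment the last writer silently wins (a crude tie-break).
    bwd[yt[inside], xt[inside]] = -fwd[inside]
    # Disocclusions remain NaN; the paper proposes three dedicated filling
    # strategies, whereas here one would have to inpaint the NaNs somehow.
    return bwd
```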
Abstract:
Service Oriented Computing is a new programming paradigm for addressing distributed system design issues. Services are autonomous computational entities which can be dynamically discovered and composed in order to form more complex systems able to achieve different kinds of tasks. E-government, e-business and e-science are some examples of the IT areas where Service Oriented Computing will be exploited in the coming years. At present, the most prominent Service Oriented Computing technology is that of Web Services, whose specifications are enriched day by day by industrial consortia without following a precise and rigorous approach. This PhD thesis aims, on the one hand, at modelling Service Oriented Computing in a formal way in order to precisely define the main concepts it is based upon and, on the other hand, at defining a new approach, called the bipolar approach, for addressing system design issues by synergically exploiting choreography and orchestration languages related by means of a mathematical relation called conformance. Choreography allows us to describe systems of services from a global viewpoint, whereas orchestration supplies a means for addressing the same issues from a local perspective. In this work we present SOCK, a process-algebra-based language inspired by the Web Service orchestration language WS-BPEL, which captures the essentials of Service Oriented Computing. From the definition of SOCK we are able to define a general model for Service Oriented Computing in which services and systems of services are related to finite state automata and to process-algebra concurrent systems, respectively. Furthermore, we introduce a formal language for dealing with choreography. This language is equipped with a formal semantics and, together with a subset of the SOCK calculus, forms the bipolar framework. Finally, we present JOLIE, a Java implementation of a subset of the SOCK calculus and part of the bipolar framework we intend to promote.
Abstract:
The present work offers a comprehensive and comparative study of the different legal and regulatory problems involved in international securitization transactions. First, an introduction to securitization is provided, covering the basic elements of the transaction, followed by its different varieties, including dynamic securitization and synthetic securitization structures. Together with this introduction to the intricacies of the structure, an insight into the influence of securitization on the financial and economic crisis of 2007-2009 is also provided, as well as an overview of the process of regulatory competition and cooperation that constitutes the framework for the international aspects of securitization. The next Chapter focuses on the aspects that constitute the foundations of structured finance: the inception of the vehicle and the transfer of the risks associated with the securitized assets, with particular emphasis on the validity of those elements and on how a securitization transaction could be threatened at its root. In this sense, special importance is given to the validity of the trust as an instrument of finance, to the assignment of future receivables or receivables in block, and to the importance of formalities for the validity of corporations, trusts, assignments, etc., as well as to the interaction of the formalities contained in general corporate, trust and assignment law with those contemplated under specific securitization regulations. Chapter III then focuses on creditor protection. We provide some insights into the debate on the capital structure of the firm and its inadequacy for assessing the financial soundness problems inherent in securitization, and then proceed to analyze the importance of rules on creditor protection in the context of securitization. The corollary lies in the rules applicable in case of insolvency. Here we distinguish the cases where a party involved in the transaction goes bankrupt from those where the transaction itself collapses. Finally, we focus on the scenario where a substance-over-form analysis may compromise some of the elements of the structure (notably the limited liability of the sponsor and/or the transfer of assets) by means of veil-piercing, substantive-consolidation or recharacterization theories. Once these elements have been covered, the following Chapters turn to the regulatory aspects of the transaction. Chapter IV concerns "market" regulations, i.e. those dealing with information disclosure and other rules (appointment of the indenture trustee, and elaboration of a rating by a rating agency) concerning the offering of asset-backed securities to the public. Chapter V, on the other hand, focuses on "prudential" regulation of the entity entrusted with securitizing assets (the so-called Special Purpose Vehicle) and of the other entities involved in the process. Regarding the SPV, reference is made to licensing requirements, restrictions on activities and governance structures designed to prevent abuses. Regarding the sponsor of the transaction, the focus is on provisions on sound origination practices and on the servicing function. Finally, we study accounting and banking regulations, including the Basel I and Basel II Frameworks, which determine the consolidation of the SPV and the de-recognition of the securitized assets from the originating company's balance sheet, as well as the subsequent treatment of those assets, in particular by banks. Chapters VI-IX are concerned with liability matters.
Chapter VI is an introduction to the different sources of liability. Chapter VII focuses on the liability of the SPV and its management for the information supplied to investors, the management of the asset pool, and breaches of loyalty (or fiduciary) duties. Chapter VIII addresses the liability of the originator as a result of such information and statements, but also as a result of inadequate or reckless origination or servicing practices. Chapter IX finally focuses on the third parties entrusted with ensuring the soundness of the transaction towards the market, the so-called gatekeepers. In this respect, special emphasis is placed on the liability of indenture trustees, underwriters and rating agencies. Chapters X and XI focus on the international aspects of securitization. Chapter X contains a conflict-of-laws analysis of the different aspects of structured finance. In this respect, a study is made of the laws applicable to the vehicle, to the transfer of risks (whether by assignment or by means of derivatives contracts) and to liability issues; a study is also made of the competent jurisdiction (and applicable law) in bankruptcy cases, as well as in cases where a substance-over-form analysis is performed. Special attention is also devoted to the role of financial and securities regulations, to their territorial limits, and to the extraterritoriality problems involved. Chapter XI supplements the prior Chapter, for it analyzes the limits placed on the States' exercise of regulatory power by the personal and "market" freedoms included in the US Constitution and the EU Treaties. Reference is also made to the (still insufficient) rules of the WTO Framework and their significance for the States' recognition and regulation of securitization transactions.
Abstract:
A newly developed global atmospheric chemistry and circulation model (ECHAM5/MESSy1) was used to investigate the chemistry and transport of ozone precursors, with an emphasis on non-methane hydrocarbons. To this end, the model was extensively evaluated by comparing its results with measurements of various origins. The analysis shows that the model predicts the distribution of ozone realistically, in both magnitude and seasonal cycle. At the tropopause, the model correctly reproduces the exchange between stratosphere and troposphere without prescribed fluxes or concentrations. The model simulates the ozone precursors with varying quality relative to the measurements: although the alkanes are reproduced well, some deviations arise for the alkenes. Among the oxidized species, formaldehyde (HCHO) is reproduced correctly, while the correlations between observations and model results are considerably worse for methanol (CH3OH) and acetone (CH3COCH3). To improve the model's performance for oxidized species, several sensitivity studies were carried out. These species are influenced by emission from and deposition into the ocean, and knowledge of the gas exchange with the ocean is subject to large uncertainties. To improve the results of ECHAM5/MESSy1, the new submodel AIRSEA was developed and integrated into the MESSy structure. This submodel accounts for the gas exchange between ocean and atmosphere, including that of oxidized species. AIRSEA, which requires information on the liquid-phase concentration of each gas in the ocean's surface water, was tested extensively. The application of the new submodel slightly improves the model results for acetone and methanol, although the use of a prescribed liquid-phase concentration strongly limits the success of the method, since measurements are not available in sufficient quantity. This work provides new insights into organic species and highlights the importance of the coupling between ocean and atmosphere for the budgets of many gases.
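For context, air-sea gas exchange of the kind AIRSEA computes is commonly written as a bulk formula, F = k_w (C_w - C_a/H), with transfer velocity k_w, water- and air-side concentrations C_w and C_a, and a dimensionless Henry coefficient H. The sketch below shows only this textbook relation; it is not AIRSEA's actual parameterization, and all names and numbers are illustrative.

```python
def air_sea_flux(k_w, c_water, c_air, henry):
    """Bulk air-sea gas flux, positive from ocean to atmosphere.
    k_w:     transfer (piston) velocity, m/s
    c_water: dissolved concentration in surface water, mol/m^3
    c_air:   atmospheric concentration, mol/m^3
    henry:   dimensionless Henry coefficient, c_air/c_water at equilibrium"""
    return k_w * (c_water - c_air / henry)

# Example: a gas supersaturated in the water outgasses (flux > 0).
print(air_sea_flux(k_w=5e-5, c_water=2e-4, c_air=1e-5, henry=0.3))
```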
Abstract:
Three qualitative studies were carried out with the aim of understanding the representations that the general population, oncology patients and health professionals have constructed of cancer, chemotherapy and bone marrow transplantation, and of analyzing the similarities and differences among them. The study was conducted in Bogotá (Colombia) with 55 people: 20 cancer patients undergoing bone marrow transplantation, 20 people not diagnosed with cancer, and 15 people working in the care of cancer patients. An in-depth interview was conducted with every participant, together with free-association tasks (classical and by substitution) on the words "cancer", "chemotherapy" and "bone marrow transplant". The data obtained were analyzed in the light of Social Representations Theory (SRT). The analysis of the information followed the technique of qualitative content analysis in order to find symbolic meanings and to construct, name and define categories. For all three groups, cancer is a terrible disease that can lead to death. Health personnel and the general population believe that the disease generates terror, anguish and fear. Patients are aware of the seriousness of, and the consequent fear caused by, a disease that changes everything, produces suffering and pain, forces dependence on others and can lead to death. Health personnel consider that patients may experience the disease as a punishment, while the general population believes it may be the consequence of unhealthy lifestyles. For everyone, chemotherapy is a treatment for the disease which, on the one hand, has difficult and visible side effects that produce negative feelings of fear and anguish and, at the same time, constitutes an option and a possibility of cure. For all groups, bone marrow transplantation represents an opportunity.
Abstract:
This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application whose execution time is dominated by a high number of floating-point instructions. Then the thesis addresses the central problem of efficiently managing power peaks in heterogeneous computing systems. Finally, it discusses a memory-bound problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed. The implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. This part of the work makes two main contributions: the approach reduces the supply cost due to high peak power while having a negligible impact on the parallelism of the computing nodes, and, from another point of view, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit. Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration, reducing the number of random memory accesses.
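The peak-power idea (start work only when the cluster's instantaneous draw stays under a cap) can be sketched in a few lines. This is a deliberately simple greedy stand-in, not the scheduler developed in the thesis; the task model, units and power-descending ordering are assumptions.

```python
import heapq

def schedule(tasks, power_cap):
    """tasks: list of (duration_s, power_w); returns [(task_index, start_s)].
    Assumes every single task fits under the cap on its own."""
    order = sorted(range(len(tasks)), key=lambda i: -tasks[i][1])  # power-hungry first
    running = []                          # min-heap of (finish_time, power_w)
    now, load, plan = 0.0, 0.0, []
    for i in order:
        dur, pw = tasks[i]
        while load + pw > power_cap and running:   # wait for tasks to finish
            now, freed = heapq.heappop(running)
            load -= freed
        plan.append((i, now))
        heapq.heappush(running, (now + dur, pw))
        load += pw
    return plan

# Two 300 W tasks run together; the 250 W task waits for a slot under 600 W.
print(schedule([(10, 300), (10, 300), (5, 250)], power_cap=600))
```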
Abstract:
Research on exact solutions of mixed integer problems is an active topic in the scientific community. State-of-the-art MIP solvers exploit a floating-point numerical representation, thereby introducing small approximations. Although such MIP solvers yield reliable results for the majority of problems, there are cases in which higher accuracy is required. Indeed, it is known that for some applications floating-point solvers provide falsely feasible solutions, i.e. solutions marked as feasible because of approximations, which would not pass a check with exact arithmetic and cannot be practically implemented. The framework of the current dissertation is SCIP, a mixed integer programming solver mainly developed at the Zuse Institute Berlin, where a new approach for solving MIPs exactly has been considered. Specifically, we developed a constraint handler to plug into SCIP, with the aim of analyzing the accuracy of the floating-point solutions provided and of computing exact primal solutions starting from floating-point ones. We conducted a few computational experiments to test the exact primal constraint handler under two main settings. Analysis mode allowed us to collect statistics on the reliability of current SCIP solutions. Our results confirm that floating-point solutions are accurate enough for many instances; however, our analysis highlighted the presence of numerical errors of varying magnitude. In enforce mode, our constraint handler is able to suggest exact solutions built from the integer part of a floating-point solution. With the latter setting, results show a general improvement in the quality of the final solutions provided, without a significant loss of performance.
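The core idea (trust only the integer part of a floating-point solution and re-verify in exact arithmetic) is easy to demonstrate outside of SCIP. The sketch below uses Python's fractions module on a single row a^T x <= b; it is a conceptual illustration, not the constraint handler's code or SCIP's API.

```python
from fractions import Fraction

def check_row_exact(coeffs, rhs, float_sol, integer_vars):
    """Exact check of sum_j a_j * x_j <= b; pass a_j and b as strings
    (e.g. '1/10') so no decimal is corrupted on the way in."""
    total = Fraction(0)
    for var, a in coeffs.items():
        x = float_sol[var]
        if var in integer_vars:
            x = round(x)              # keep only the integer part
        total += Fraction(a) * Fraction(x)
    return total <= Fraction(rhs)

# 0.1*x + 0.2*y <= 0.3 holds exactly at x = y = 1, although in binary
# floating point 0.1 + 0.2 evaluates to 0.30000000000000004 > 0.3.
print(check_row_exact({"x": "1/10", "y": "1/5"}, "3/10",
                      {"x": 1.0, "y": 1.0}, {"x", "y"}))   # True
```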
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the world-wide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proven to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many different organizations, scientific and otherwise. Clouds allow access to and use of large, externally owned computing resources shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. Within the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS, while other approaches to Cloud usage are being explored at the prototype level, such as the work done in this thesis. This effort is of paramount importance in equipping CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend its computing to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a use case suited to the CMS experiment's needs. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on a benchmark CMS physics use case is also demonstrated.
Abstract:
The purpose of this thesis is to identify areas for improvement in the current stakeholder management literature. The current stakeholder management theories were analyzed to determine their benefits and detriments. To determine how these theories work in a corporation, General Motors was selected as a single-case study to examine the patterns of stakeholder management over time. These patterns demonstrated the need for dynamic stakeholder management over time, with an emphasis on collaboration and the necessity of recognizing the greater stakeholder network surrounding the corporation. Proper stakeholder management in the early years of General Motors would have prevented its failure, while the organizational culture as a path-dependent variable made it difficult for General Motors to alter long-standing stakeholder relationships.
Abstract:
PURPOSE: Understanding the learning styles of individuals may assist in tailoring an educational program to optimize learning. General surgery faculty and residents have previously been characterized as tending toward particular learning styles. We seek to better understand the learning styles of general surgery residents and the differences that may exist within the population. METHODS: The Kolb Learning Style Inventory was administered yearly to general surgery residents at the University of Cincinnati from 1994 to 2006. This tool characterizes learning styles into 4 groups: converging, accommodating, assimilating, and diverging. The converging learning style involves learning by actively solving problems. The accommodating learning style uses emotion and interpersonal relationships. The assimilating learning style learns by abstract logic. The diverging learning style learns best by observation. Chi-square analysis and analysis of variance were performed to determine significance. RESULTS: Surveys from 1994 to 2006 (91 residents, 325 responses) were analyzed. The prevalent learning style was converging (185, 57%), followed by assimilating (58, 18%), accommodating (44, 14%), and diverging (38, 12%). At the PGY 1 and 2 levels, male and female residents differed in learning style, with the accommodating learning style relatively more frequent in women and the assimilating learning style more frequent in men (Table 1, p ≤ 0.001, chi-square test). Interestingly, learning style did not seem to change with advancing PGY level within the program, which suggests that individual learning styles may be constant throughout residency training. When a resident's learning style did change, it tended to shift to converging. In addition, no relation was found between learning style and participation in dedicated basic science training or performance on the ABSIT/SBSE. CONCLUSIONS: Our data suggest that learning style differs between male and female general surgery residents but not with PGY level or ABSIT/SBSE performance. A greater understanding of individual learning styles may allow further refinement and tailoring of surgical programs.
Abstract:
BACKGROUND: there is inadequate evidence to support currently formulated NHS strategies to achieve health promotion and preventative care in older people through broad-based screening and assessment in primary care. The most extensively evaluated delivery instrument for this purpose is Health Risk Appraisal (HRA). This article describes a trial using HRA to evaluate the effect on health behaviour and preventative-care uptake in older people in NHS primary care. METHODS: a randomised controlled trial was undertaken in three London primary care group practices. Functionally independent community-dwelling patients older than 65 years (n = 2,503) received a self-administered Health Risk Appraisal for Older Persons (HRA-O) questionnaire leading to computer-generated individualised written feedback to participants and general practitioners (GPs), integrated into practice information-technology (IT) systems. All primary care staff received training in preventative health in older people. The main outcome measures were self-reported health behaviour and preventative care uptake at 1-year follow-up. RESULTS: of 2,503 individuals randomised, 2,006 respondents (80.1%) (intervention, n = 940, control n = 1,066) were available for analysis. Intervention group respondents reported slightly higher pneumococcal vaccination uptake and equivocal improvement in physical activity levels compared with controls. No significant differences were observed for any other categories of health behaviour or preventative care measures at 1-year follow-up. CONCLUSIONS: HRA-O implemented in this way resulted in minimal improvement of health behaviour or uptake of preventative care measures in older people. Supplementary reinforcement involving contact by health professionals with patients over and above routine clinical encounters may be a prerequisite to the effectiveness of IT-based delivery systems for health promotion in older people.
Abstract:
While clinical studies have shown a negative relationship between obesity and mental health in women, population studies have not shown a consistent association. However, many of these studies can be criticized regarding fatness level criteria, lack of control variables, and validity of the psychological variables. The purpose of this research was to elucidate the relationship between fatness level and mental health in United States women using data from the First National Health and Nutrition Examination Survey (NHANES I), which was conducted on a national probability sample from 1971 to 1974. Mental health was measured by the General Well-Being Schedule (GWB), and fatness level was determined by the sum of the triceps and subscapular skinfolds. Women were categorized as lean (15th percentile or less), normal (16th to 84th percentiles), or obese (85th percentile or greater). A conceptual framework was developed which identified the variables of age, race, marital status, socioeconomic status (education), employment status, number of births, physical health, weight history, and perception of body image as important to the fatness level-GWB relationship. Multiple regression analyses were performed separately for whites and blacks with GWB as the response variable, and fatness level, age, education, employment status, number of births, marital status, and health perception as predictor variables. In addition, 2- and 3-way interaction terms for leanness, obesity and age were included as predictor variables. Variables related to weight history and perception of body image were not collected in NHANES I, and thus were not included in this study. The results indicated that obesity was a statistically significant predictor of lower GWB in white women even when the other predictor variables were controlled. The full regression model identified the young, more educated, obese female as a subgroup with lower GWB, especially in blacks. These findings were not consistent with the previous non-clinical studies which found that obesity was associated with better mental health. The social stigma of being obese and the preoccupation of women with being lean may have contributed to the lower GWB in these women.
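The regression design described above is straightforward to express with a model formula. The sketch below is purely illustrative: the file, column names and 0/1 coding of the lean/obese indicators are hypothetical, and the study's 3-way terms are omitted.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nhanes1_women.csv")   # hypothetical extract of NHANES I
# lean/obese are 0/1 indicators with 'normal' fatness as the reference;
# lean*age expands to lean + age + lean:age (the 2-way interaction).
model = smf.ols(
    "gwb ~ lean*age + obese*age + education + employed"
    " + births + married + health_perception",
    data=df,
).fit()
print(model.summary())
```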
Abstract:
The purpose of this study was to investigate the generality and temporal endurance of the bivalency effect in task switching. This effect refers to the slowing on univalent stimuli that occurs when bivalent stimuli appear occasionally. We used a paradigm involving predictable switches between 3 simple tasks, with bivalent stimuli occasionally occurring on one of the tasks. The generality of the bivalency effect was investigated by using different tasks and different types of bivalent stimuli, and the endurance of this effect was investigated across different intertrial intervals (ITIs) and across the univalent trials that followed trials with bivalent stimuli. In 3 experiments, the results showed a general, robust, and enduring bivalency effect for all ITI conditions. Although the effect declined across trials, it remained significant for about 4 trials following one with a bivalent stimulus. Our findings emphasise the importance of top–down processes in task-switching performance.
Abstract:
PURPOSE: The goal of the study was to assess the causes and analyze the cases of sudden cardiac death (SCD) in victims referred to the department of forensic medicine in Lausanne, with a particular focus on sports-related fatalities, including those during leisure sporting activities. To date, no such published assessment exists for either Switzerland or central Europe. METHODS: This is a retrospective study based on the records of autopsies of SCD victims, aged 10 to 50 years, performed at the University Centre of Legal Medicine in Lausanne from 1995 to 2010. The study population was divided into two groups: sport-related (SR) and not sport-related (NSR) SCDs. RESULTS: During the study period, 188 cases of SCD were recorded: 166 (88%) were NSR and 22 (12%) SR. The mean age of the 188 victims was 37.3 ± 10.1 years, and the majority were male (79%). A cause of death was established in 84% of cases, and the pathology responsible for death varied with the age of the victims. In the NSR group, the mean age was 38.2 ± 9.2 years and 82% of victims were male. Coronary artery disease (CAD) was the main diagnosis in victims aged 30-50 years. The majority of morphologically normal hearts were observed in the 15-29 year age range. There was no case in the 10-14 year age range. In the SR group, 91% of victims died during leisure sporting activities. In this group the mean age was 30.5 ± 13.5 years, with the majority being male (82%). The main cause of death was CAD, with 6 cases (27%) and a mean age of 40.8 ± 5.5 years; the youngest victim with CAD was 33 years old. A morphologically normal heart was observed in 5 cases (23%), with a mean age of 24.4 ± 14.9 years. The most frequently implicated sporting activities were hiking (26%) and swimming (17%). CONCLUSION: In this study, CAD was the most common cause of death in both groups. Although this pathology most often affects adults over 35 years of age, there were also some victims under 35 in both groups. SCDs during sport are mostly related to leisure sporting activities, for which preventive measures are not yet usually established. This study also highlights the need to inform both athletes and non-athletes of the cardiovascular risks of sporting activity, as well as the role of forensic autopsies and of registries involving forensic pathologists in the study of SR SCD.