91 results for Hybrid Capture 2
Abstract:
Background: Cervical cancer and infection with human immunodeficiency virus (HIV) are both important public health problems in South Africa (SA). The aim of this study was to determine the prevalence of cervical squamous intraepithelial lesions (SILs), high-risk human papillomavirus (HR-HPV), HPV viral load and HPV genotypes in HIV-positive women initiating anti-retroviral (ARV) therapy. Methods: A cross-sectional survey was conducted at an ARV treatment clinic in Cape Town, SA in 2007. Cervical specimens were taken for cytological analysis and HPV testing. The Digene Hybrid Capture 2 (HC2) test was used to detect HR-HPV, with relative light units (RLU) used as a measure of HPV viral load. HPV types were determined using the Roche Linear Array HPV Genotyping test. Crude associations with abnormal cytology were tested and multiple logistic regression was used to determine independent risk factors for abnormal cytology. Results: The median age of the 109 participants was 31 years and the median CD4 count was 125 cells/mm3; 66.3% had an abnormal Pap smear, the HR-HPV prevalence was 78.9% (Digene), the median HPV viral load was 181.1 RLU (HC2-positive samples only) and 78.4% had multiple genotypes. Among women with abnormal smears the most prevalent HR-HPV types were HPV 16, 58 and 51, each with a prevalence of 28.5%. On univariate analysis, HR-HPV, multiple HPV types and HPV viral load were significantly associated with the presence of low- and high-grade SILs (LSIL/HSIL). Multivariate logistic regression showed that HPV viral load was associated with increased odds of LSIL/HSIL: an odds ratio of 10.7 (95% CI 2.0 – 57.7) for women who were HC2 positive with a viral load ≤ 181.1 RLU (the median HPV viral load) and 33.8 (95% CI 6.4 – 178.9) for women who were HC2 positive with a viral load > 181.1 RLU. Conclusion: Women initiating ARVs have a high prevalence of abnormal Pap smears and HR-HPV. Our results underscore the need for locally relevant, rigorous screening protocols for the increasing numbers of women accessing ARV therapy, so that the benefits of ARVs are not partially offset by an excess risk of cervical cancer.
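To make the analysis step above concrete, here is a minimal sketch of a multiple logistic regression producing odds ratios with 95% confidence intervals, in the spirit of the multivariate analysis described. It uses synthetic data and an assumed predictor coding (HC2 negative vs. positive at or below / above the median RLU); it is not the study's dataset or software, and Python with statsmodels is assumed purely for illustration.

```python
# Minimal sketch of a multiple logistic regression yielding odds ratios,
# in the spirit of the analysis described above. The data are synthetic
# and the predictor coding is an illustrative assumption only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 109
# Hypothetical predictor: viral-load category (0 = HC2 negative,
# 1 = positive <= median RLU, 2 = positive > median RLU), plus age.
viral_load_cat = rng.integers(0, 3, size=n)
age = rng.normal(31, 6, size=n)
# Synthetic outcome: odds of LSIL/HSIL rise with viral-load category.
logit_p = -1.0 + 1.2 * viral_load_cat + 0.01 * (age - 31)
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

low = (viral_load_cat == 1).astype(int)    # HC2 positive, <= median RLU
high = (viral_load_cat == 2).astype(int)   # HC2 positive, > median RLU
X = sm.add_constant(np.column_stack([low, high, age]))
fit = sm.Logit(y, X).fit(disp=False)
print(np.exp(fit.params))      # odds ratios (relative to the HC2-negative group)
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale
```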
Abstract:
Fire safety has become an important part of structural design due to the ever-increasing loss of property and lives during fires. Conventionally, the fire rating of load-bearing wall systems made of light gauge steel frames (LSF) is determined using fire tests based on the standard time-temperature curve given in ISO 834 (ISO, 1999), which originated from the use of wood-burning furnaces in the early 1900s. However, modern commercial and residential buildings make use of thermoplastic materials, which means considerably higher fuel loads. Hence a detailed fire research study into the performance of LSF walls was undertaken using real fire curves developed from Eurocode parametric curves (ECS, 2002) and Barnett's BFD curves (Barnett, 2002), based on both full-scale fire tests and numerical studies. It included LSF walls without any insulation and the recently developed externally insulated composite panel system. This paper presents the details of the numerical studies and their results. It also includes brief details of the development of the real building fire curves and the experimental studies.
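For reference, the sketch below evaluates the ISO 834 standard time-temperature curve and the heating phase of the Eurocode parametric curve as they are commonly quoted from EN 1991-1-2; the opening factor and thermal-inertia inputs are example values, not those used in the study, and Python is assumed purely for illustration.

```python
# Sketch of the two idealised fire curves mentioned above: the ISO 834
# standard curve and the heating phase of the Eurocode parametric curve.
# Formulas follow the commonly quoted EN 1991-1-2 expressions; the material
# parameters below (opening factor, thermal inertia) are example values.
import numpy as np

def iso834(t_min, t0=20.0):
    """ISO 834 standard time-temperature curve (t in minutes, result in deg C)."""
    return t0 + 345.0 * np.log10(8.0 * t_min + 1.0)

def eurocode_parametric_heating(t_min, opening_factor=0.04, b=1160.0, t0=20.0):
    """Heating phase of the EN 1991-1-2 parametric curve (deg C).
    opening_factor in m^0.5, b = sqrt(rho*c*lambda) in J/(m^2 s^0.5 K)."""
    gamma = (opening_factor / b) ** 2 / (0.04 / 1160.0) ** 2
    t_star = (t_min / 60.0) * gamma   # expanded time in hours, scaled by gamma
    return t0 + 1325.0 * (1.0 - 0.324 * np.exp(-0.2 * t_star)
                              - 0.204 * np.exp(-1.7 * t_star)
                              - 0.472 * np.exp(-19.0 * t_star))

t = np.linspace(0.0, 120.0, 241)          # 0-120 minutes
print(iso834(t)[-1], eurocode_parametric_heating(t)[-1])
```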
Abstract:
Key topics: Since the birth of the Open Source movement in the mid-80s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow the software's use in exchange for a fee, open source licenses grant users more rights, such as free use, free copying, free modification and free distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the system of governance that underlies such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users. On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT Services & Software Engineering firms and open source software publishers. However, the business model implications differ for each of these categories: the activities of providers of packaged solutions and IT Services & Software Engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. The literature so far identifies and depicts only two generic types of business models for open source software publishers: the "bundling" business model (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business model (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully and (2) to depict an additional business model for open source software publishers which can be used in a different context.
To do so, this paper draws upon an exploratory case study of IdealX, a French open source security software publisher. The case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager. It aims to depict the process of IdealX's search for the appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of "mutualisation", which is applicable in a different context. Results and implications: Finally, this article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model) as well as the conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes further into the traditional concept of business model used by scholars in the open source-related literature. In this article, a business model is not only considered as a way of generating income (a "revenue model" (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenblum, 2002; Teece, 2007). Consequently, this paper analyses business models from the point of view of these two components.
Abstract:
Discrete event-driven simulations of digital communication networks have been widely used. However, it is difficult to use a network simulator to simulate a hybrid system in which some objects are not discrete event-driven but are continuous time-driven. A networked control system (NCS) is such an application, in which physical process dynamics are continuous by nature. We have designed and implemented a hybrid simulation environment which effectively integrates models of continuous-time plant processes and discrete-event communication networks by extending the open source network simulator NS-2. To do this, a synchronisation mechanism was developed to connect a continuous plant simulation with a discrete network simulation. Furthermore, for evaluating co-design approaches in an NCS environment, a piggybacking method was adopted to allow the control period to be adjusted during simulations. The effectiveness of the technique is demonstrated through case studies which simulate a networked control scenario in which the communication and control system properties are defined explicitly.
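The following is a minimal conceptual sketch, not NS-2 code or the paper's actual extension: it illustrates the synchronisation idea described above, where a discrete-event queue drives the simulation and the continuous-time plant is integrated up to each event's timestamp before the event is processed. The plant model, control law, sampling period and network delay are all illustrative assumptions.

```python
# Conceptual sketch of synchronising a continuous-time plant with a
# discrete-event network simulation: the plant is advanced (by Euler
# integration) to the timestamp of each network event before handling it.
import heapq

class Plant:
    """Simple first-order plant  dx/dt = a*x + b*u,  integrated with Euler steps."""
    def __init__(self, x0=1.0, a=-0.5, b=1.0, dt=0.001):
        self.x, self.a, self.b, self.u, self.dt, self.t = x0, a, b, 0.0, dt, 0.0

    def advance_to(self, t_target):
        while self.t + self.dt <= t_target:
            self.x += (self.a * self.x + self.b * self.u) * self.dt
            self.t += self.dt

events = []  # (time, kind, payload) -- the discrete-event queue
plant = Plant()
control_period, network_delay = 0.05, 0.01   # assumed values (seconds)

heapq.heappush(events, (0.0, "sample", None))
while events and events[0][0] < 1.0:
    t_ev, kind, payload = heapq.heappop(events)
    plant.advance_to(t_ev)                   # synchronise plant with event time
    if kind == "sample":
        # sensor sample travels over the "network" with a fixed delay
        heapq.heappush(events, (t_ev + network_delay, "actuate", -2.0 * plant.x))
        heapq.heappush(events, (t_ev + control_period, "sample", None))
    elif kind == "actuate":
        plant.u = payload                    # delayed control action applied
print(round(plant.t, 3), round(plant.x, 4))
```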
Abstract:
Sexually transmitted chlamydial infection initially establishes in the endocervix in females, but if the infection ascends the genital tract, significant disease, including infertility, can result. Many of the mechanisms associated with chlamydial infection kinetics and disease ascension are unknown. We attempt to elucidate some of these processes by developing a novel cellular automata–partial differential equation model. We matched our model outputs to experimental data of chlamydial infection of the guinea-pig cervix and carried out sensitivity analyses to determine the relative influence of model parameters. We found that the rate of recruitment and action of innate immune cells in clearing extracellular chlamydial particles, and the rate of passive movement of chlamydial particles, are the dominant factors determining the early course of infection, the magnitude of the peak of the chlamydial time course and the timing of that peak. The rate of passive movement was found to be the most important factor in determining whether infection would ascend to the upper genital tract. This study highlights the importance of early innate immunity in the control of chlamydial infection, and the significance of motility-diffusive properties and the adaptive immune response in the magnitude of infection and in its ascension.
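As a minimal illustration of the "passive movement" component only (the full model couples a cellular automaton with partial differential equations and is not reproduced here), the sketch below steps one-dimensional diffusion of extracellular particles along a tract, with a linear clearance term standing in for innate immune action; all parameter values are placeholders, not fitted values from the study.

```python
# Minimal illustration of the diffusive ("passive movement") component only:
# an explicit finite-difference step for extracellular particle concentration
# along a 1-D tract, with a simple linear clearance term standing in for
# innate immune action. All parameter values are placeholders.
import numpy as np

def step(c, D, clearance, dx, dt):
    """One explicit Euler step of  dc/dt = D * d2c/dx2 - clearance * c."""
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    return c + dt * (D * lap - clearance * c)

nx, dx, dt = 100, 0.1, 0.01
D, clearance = 0.05, 0.1                   # placeholder diffusion / clearance rates
c = np.zeros(nx)
c[:5] = 1.0                                # infection seeded at the cervical end
for _ in range(1000):                      # D*dt/dx^2 = 0.05, within stability limit
    c = step(c, D, clearance, dx, dt)

spread = np.flatnonzero(c > 1e-3)
print(float(c.max()), int(spread[-1]) if spread.size else 0)  # peak and extent of spread
```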
Abstract:
This article discusses a pilot project that adapted the methods of digital storytelling and oral history to capture a range of personal responses to the official Apology to Australia’s Indigenous Peoples delivered by Prime Minister Kevin Rudd on 13 February 2008. The project was an initiative of the State Library of Queensland and resulted in a small collection of multimedia stories, incorporating a variety of personal and political perspectives. The article describes how the traditional digital storytelling workshop method was adapted for use in the project, and then reflects on the outcomes and continuing life of the project. The article concludes by suggesting that aspects of the resultant model might be applied to other projects carried out by cultural institutions and community-based media organizations.
Abstract:
This paper proposes a novel Hybrid Clustering approach for XML documents (HCX) that first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. The empirical analysis reveals that the proposed method is scalable and accurate.
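A much-simplified sketch of the two-stage idea follows, assuming root-to-leaf tag paths as a crude stand-in for frequent subtrees and a Jaccard-style overlap as the content measure; neither is the paper's actual algorithm, and the documents, support threshold and similarity measure are illustrative assumptions.

```python
# Much-simplified sketch of the two-stage idea above: use shared structure
# (root-to-node tag paths as a crude stand-in for frequent subtrees) to
# constrain which text content is compared between XML documents.
from collections import Counter
from xml.etree import ElementTree as ET

def paths_and_text(xml_string):
    """Map each root-to-node tag path to the words found directly under it."""
    out = {}
    def walk(node, prefix):
        path = prefix + "/" + node.tag
        out.setdefault(path, []).extend((node.text or "").split())
        for child in node:
            walk(child, path)
    walk(ET.fromstring(xml_string), "")
    return out

docs = [
    "<book><title>xml clustering</title><year>2009</year></book>",
    "<book><title>hybrid clustering of xml</title><author>someone</author></book>",
]
doc_paths = [paths_and_text(d) for d in docs]

# "Frequent" structure: paths appearing in every document (support = 100%).
frequent = set(doc_paths[0]) & set(doc_paths[1])

# Content similarity computed only over the frequent (shared) structure.
bags = [Counter(w for p in frequent for w in dp[p]) for dp in doc_paths]
common = sum((bags[0] & bags[1]).values())
total = max(sum(bags[0].values()) + sum(bags[1].values()) - common, 1)
print(frequent, common / total)   # Jaccard-like content overlap on shared paths
```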
Abstract:
3D motion capture is a medium that plots motion, typically human motion, converting it into a form that can be represented digitally. It is a fast-evolving field, and recent inertial technology may provide new artistic possibilities for its use in live performance. Although not often used in this context, motion capture has a combination of attributes that can provide unique forms of collaboration with the performing arts. The inertial motion capture suit used for this study has orientation sensors placed at strategic points on the body to map body motion. Its portability, real-time performance, ease of use, and immunity from the line-of-sight problems inherent in optical systems suggest it would work well as a live performance technology. Many animation techniques can be used in real time. This research examines a broad cross-section of these techniques using four practice-led cases to assess the suitability of inertial motion capture for live performance. Although each case explores different visual possibilities, all make use of the performativity of the medium, using either an improvisational format or interactivity among stage, audience and screen that would be difficult to emulate any other way. A real-time environment is not capable of reproducing the depth and sophistication of the animation people have come to expect through media, which takes many hours to render. In time, the combination of what can be produced in real time and the tools available in a 3D environment will no doubt create its own tree of aesthetic directions in live performance. The case studies also examine the potential for interactivity that this technology offers.
Abstract:
The central aim for the research undertaken in this PhD thesis is the development of a model for simulating water droplet movement on a leaf surface and to compare the model behavior with experimental observations. A series of five papers has been presented to explain systematically the way in which this droplet modelling work has been realised. Knowing the path of the droplet on the leaf surface is important for understanding how a droplet of water, pesticide, or nutrient will be absorbed through the leaf surface. An important aspect of the research is the generation of a leaf surface representation that acts as the foundation of the droplet model. Initially a laser scanner is used to capture the surface characteristics for two types of leaves in the form of a large scattered data set. After the identification of the leaf surface boundary, a set of internal points is chosen over which a triangulation of the surface is constructed. We present a novel hybrid approach for leaf surface fitting on this triangulation that combines Clough-Tocher (CT) and radial basis function (RBF) methods to achieve a surface with a continuously turning normal. The accuracy of the hybrid technique is assessed using numerical experimentation. The hybrid CT-RBF method is shown to give good representations of Frangipani and Anthurium leaves. Such leaf models facilitate an understanding of plant development and permit the modelling of the interaction of plants with their environment. The motion of a droplet traversing this virtual leaf surface is affected by various forces including gravity, friction and resistance between the surface and the droplet. The innovation of our model is the use of thin-film theory in the context of droplet movement to determine the thickness of the droplet as it moves on the surface. Experimental verification shows that the droplet model captures reality quite well and produces realistic droplet motion on the leaf surface. Most importantly, we observed that the simulated droplet motion follows the contours of the surface and spreads as a thin film. In the future, the model may be applied to determine the path of a droplet of pesticide along a leaf surface before it falls from or comes to a standstill on the surface. It will also be used to study the paths of many droplets of water or pesticide moving and colliding on the surface.
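As a hedged illustration of one ingredient of the surface-fitting stage, the sketch below fits scattered points with radial basis functions using SciPy; the Clough-Tocher component of the hybrid CT-RBF method and the actual laser-scanned leaf data are not reproduced, the point set is synthetic, and SciPy's RBFInterpolator (SciPy ≥ 1.7) is assumed purely for illustration.

```python
# Sketch of scattered-data surface fitting with radial basis functions,
# one ingredient of the hybrid CT-RBF method described above. The points
# below are synthetic stand-ins for laser-scanned leaf data.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))               # scattered (x, y) samples
z = 0.2 * np.sin(3 * xy[:, 0]) * np.cos(2 * xy[:, 1])    # synthetic "leaf height"

surface = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-6)

# Evaluate the fitted surface on a regular grid (e.g. for droplet tracking).
gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_fit = surface(grid).reshape(gx.shape)
print(float(np.abs(z_fit).max()))
```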
Abstract:
A novel H-bridge multilevel PWM converter topology based on a series connection of a high voltage (HV) diode-clamped inverter and a low voltage (LV) conventional inverter is proposed. A DC link voltage arrangement for the new hybrid and asymmetric solution is presented to obtain the maximum number of output voltage levels while preserving adjacent switching vectors between voltage levels. Hence, a fifteen-level hybrid converter can be attained with a minimum number of power components. A comparative study has been carried out to demonstrate the high performance of the proposed configuration, which achieves a very low THD of voltage and current and thus allows the possible elimination of the output filter. Based on the proposed configuration, a new cascade inverter is verified by cascading an asymmetrical diode-clamped inverter, in which nineteen levels can be synthesised in the output voltage with the same number of components. To balance the DC link capacitor voltages for the maximum output voltage resolution, as well as to synthesise the asymmetrical DC link combination, a new Multi-output Boost (MOB) converter is utilised at the DC link of a seven-level H-bridge diode-clamped inverter. Simulation and hardware results based on different modulation methods are presented to confirm the validity of the proposed approach in achieving a high-quality output voltage.
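As a back-of-envelope illustration of how cascading stages with asymmetric DC-link voltages multiplies the number of distinct output levels, the sketch below enumerates the reachable level sums for an assumed 5-level HV stage with 3E spacing and a 3-level LV stage with E spacing, which happens to give fifteen evenly spaced levels; the actual DC-link ratios and stage topologies in the paper may differ.

```python
# Back-of-envelope sketch: the output levels of a series (cascaded) connection
# are the distinct sums of one level per stage. The per-stage level sets below
# are illustrative assumptions, not the paper's exact DC-link arrangement.
from itertools import product

def output_levels(*stage_levels):
    """Distinct output-voltage levels reachable by summing one level per stage."""
    return sorted({sum(combo) for combo in product(*stage_levels)})

E = 1.0
hv_stage = [-6 * E, -3 * E, 0.0, 3 * E, 6 * E]   # assumed 5-level HV stage, 3E spacing
lv_stage = [-E, 0.0, E]                          # assumed 3-level LV stage, E spacing

levels = output_levels(hv_stage, lv_stage)
print(len(levels), levels)   # 15 distinct, evenly spaced levels in this example
```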
Abstract:
The following exegesis will detail the key advantages and disadvantages of combining a traditional talk show genre with a linear documentary format using a small production team and a limited budget in a fast-turnaround weekly environment. It will deal with the Australian Broadcasting Corporation series Talking Heads, broadcast weekly for the network in the early evening schedule at 18.30 with the presenter Peter Thompson. As Executive Producer for the programme at its inception I was responsible for setting it up for the ABC in Brisbane, a role that included selecting most of the team to work on the series and commissioning the music, titles and all other aspects required to bring the show to the screen. What emerged when producing this generic hybrid will be examined at length, including:
- The talk show/documentary hybrid format needs longer than 26'30" to be entirely successful.
- The presenter ideally suited to the talk show/documentary format is someone who is genuinely interested in their guests and flexible enough to maintain the format against tangential odds.
- The use of illustrative footage shot in a documentary-style narrative improves the talk show format.
- The fast turnaround of the talk show/documentary hybrid puts tremendous pressure on the time frames for archive research and copyright clearance, and therefore needs to be well resourced.
- In a fast-turnaround talk show/documentary format the field components are advantageous but require very low shooting ratios to be sustainable.
- An intimate set works best for a talk show hybrid like this.
Also submitted are two DVDs of recordings of programmes I produced and directed from the first and third series. These are for consideration in the practical component of this project and reflect the changes that I made to the series.
Abstract:
In this paper, the flow-shop scheduling problem with parallel machines at each stage (machine center) is studied. For each job, its release date and due date, as well as a processing time for each of its operations, are given. The scheduling criterion consists of three parts: the total weighted earliness, the total weighted tardiness and the total weighted waiting time. The criterion takes into account the costs of storing semi-manufactured products in the course of production and ready-made products, as well as penalties for not meeting the deadlines stated in the contract with the customer. To solve the problem, three constructive algorithms and three metaheuristics (based on Tabu Search and Simulated Annealing techniques) are developed and experimentally analyzed. All the proposed algorithms operate on the notion of a so-called operation processing order, i.e. the order of operations on each machine. We show that the problem of schedule construction on the basis of a given operation processing order can be reduced to a linear programming task. We also propose an approximation algorithm for schedule construction and show the conditions under which it is optimal.
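To make the three-part criterion concrete, the sketch below evaluates it for a schedule whose start and completion times are already known. The definitions are simplified (waiting time is taken as the gap between a job's release and the start of its first operation) and the job data are invented, so this is an illustration of the cost structure rather than the paper's algorithms.

```python
# Sketch of the three-part criterion described above: total weighted earliness
# + total weighted tardiness + total weighted waiting time, for a schedule
# whose start and completion times are given. Job data are illustrative.
def criterion(jobs):
    """jobs: list of dicts with release r, due date d, start of first operation s,
    completion C, and weights we (earliness), wt (tardiness), ww (waiting)."""
    total = 0.0
    for j in jobs:
        earliness = max(j["d"] - j["C"], 0.0)
        tardiness = max(j["C"] - j["d"], 0.0)
        waiting = j["s"] - j["r"]          # simplified: wait before the first operation
        total += j["we"] * earliness + j["wt"] * tardiness + j["ww"] * waiting
    return total

jobs = [
    {"r": 0, "d": 10, "s": 1, "C": 9,  "we": 1.0, "wt": 2.0, "ww": 0.5},
    {"r": 2, "d": 12, "s": 4, "C": 14, "we": 1.0, "wt": 3.0, "ww": 0.5},
]
print(criterion(jobs))   # -> 8.5 for this toy instance
```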
Abstract:
This paper investigates virtual reality representations of the 1599 Boar’s Head Theatre and the Rose Theatre, two Renaissance places and spaces. These models become a “world elsewhere” in that they represent virtual recreations of these venues in as much detail as possible. The models are based on accurate archaeological and theatre-historical records and are easy to navigate, particularly for current use. This paper demonstrates the ways in which these models can be instructive for reading theatre today. More importantly, we introduce human figures onto the stage via motion capture, which allows us to explore the interplay between space, actor and environment. This facilitates a new way of thinking about early modern playwrights’ “attitudes to locality and localities large and small”. These venues are thus activated to intersect productively with early modern studies, so that the paper can test the historical and contemporary limits of such research.