884 results for Fully automated


Relevance:

30.00%

Publisher:

Abstract:

An underwater gas pipeline is the portion of a pipeline that crosses a river beneath its bottom. Underwater gas pipelines become increasingly hazardous as they age, and an accident at one can lead to a technological and environmental disaster on the scale of an entire region. Timely troubleshooting of all underwater gas pipelines to prevent potential accidents therefore remains a pressing task for the industry, and the most important aspect of this challenge is the quality of the automated system in question. The industry currently has no automated system that fully meets the needs of the experts who maintain underwater gas pipelines. Principal aim of this research: to develop a new automated monitoring system that simplifies the evaluation of technical condition and the decision making on planning, preventive maintenance, and repair work for underwater gas pipelines. Objectives: creation of a shared model for the new automated system using IDEF3; development of a new database system storing all information about underwater gas pipelines; development of a new application that works with the database server and explains the results obtained from it; calculation of MTBF (mean time between failures) values for specified pipelines based on quantitative data obtained from tests of the system. Conclusions: the new automated system, PodvodGazExpert, has been developed for the timely and accurate determination of the physical condition of underwater gas pipelines; the mathematical analysis of the new system is based on the principal component analysis method; and determining the physical condition of an underwater gas pipeline with the new system increases the MTBF by a factor of 8.18 over the existing system used in the industry today.
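The abstract names principal component analysis (PCA) as the mathematical core of PodvodGazExpert without further detail. As a minimal illustration of the technique only, the sketch below extracts the first principal component from two invented inspection features; all numbers and feature names are hypothetical, not from the study:

```python
import math

# Hypothetical inspection features per pipeline segment:
# (wall-thickness loss in mm, burial-depth deviation in m). Invented data.
samples = [(0.2, 0.1), (0.5, 0.3), (0.9, 0.45), (1.4, 0.8), (1.8, 0.95)]

n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n

# Entries of the 2x2 sample covariance matrix
cxx = sum((x - mean_x) ** 2 for x, _ in samples) / (n - 1)
cyy = sum((y - mean_y) ** 2 for _, y in samples) / (n - 1)
cxy = sum((x - mean_x) * (y - mean_y) for x, y in samples) / (n - 1)

# Eigenvalues of [[cxx, cxy], [cxy, cyy]] via the quadratic formula
tr, det = cxx + cyy, cxx * cyy - cxy ** 2
lam1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2  # largest eigenvalue

# First principal component: unit eigenvector for lam1
vx, vy = cxy, lam1 - cxx
norm = math.hypot(vx, vy)
pc1 = (vx / norm, vy / norm)

# Fraction of total variance captured by the first component
explained = lam1 / tr
print(pc1, round(explained, 3))
```

When the features are strongly correlated, as in this invented data, a single component captures nearly all the variance, which is what makes PCA useful for condensing many condition measurements into a few indicators.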

Relevance:

30.00%

Publisher:

Abstract:

The impact of peritoneal dialysis (PD) modality on patient survival and peritonitis rates is not fully understood, and no large-scale randomized clinical trial (RCT) is available. In the absence of an RCT, an advanced matching procedure that reduces selection bias in large cohort studies may be the best approach. The aim of this study is to compare automated peritoneal dialysis (APD) and continuous ambulatory peritoneal dialysis (CAPD) with respect to peritonitis risk, technique failure, and patient survival in a large nationwide PD cohort. This prospective cohort study included all incident PD patients with at least 90 days of PD recruited into the BRAZPD study. All patients treated exclusively with either APD or CAPD were matched on 15 covariates using a propensity score calculated with the nearest-neighbor method. The clinical outcomes analyzed were overall mortality, technique failure, and time to first peritonitis. All analyses were also adjusted for competing risks using the Fine and Gray method. After matching, 2,890 patients were included in the analysis (1,445 in each group). Baseline characteristics were similar for all covariates, including age, diabetes, BMI, center experience, coronary artery disease, cancer, literacy, hypertension, race, previous HD, gender, pre-dialysis care, family income, peripheral artery disease, and year of starting PD. The mortality rate was higher in CAPD patients than in APD patients (SHR 1.44, 95% CI 1.21-1.71), but no difference was observed for technique failure (SHR 0.83, 95% CI 0.69-1.02) or for time to the first peritonitis episode (SHR 0.96, 95% CI 0.93-1.11). In this first large PD cohort study with groups balanced on several covariates by propensity score matching, PD modality was associated with differences in neither time to first peritonitis nor technique failure. Nevertheless, patient survival was significantly better in APD patients.
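The nearest-neighbor propensity-score matching used above can be sketched in miniature. The patient IDs and scores below are invented for illustration; the actual study matched on 15 covariates across 2,890 patients:

```python
# Greedy 1:1 nearest-neighbour matching without replacement on
# precomputed propensity scores. All IDs and scores are invented.
apd  = {"a1": 0.31, "a2": 0.62, "a3": 0.48}              # treated group
capd = {"c1": 0.30, "c2": 0.55, "c3": 0.90, "c4": 0.47}  # control pool

pairs = []
available = dict(capd)
for pid, score in sorted(apd.items(), key=lambda kv: kv[1]):
    # pick the still-unmatched control with the closest propensity score
    match = min(available, key=lambda cid: abs(available[cid] - score))
    pairs.append((pid, match))
    del available[match]          # without replacement: each control used once

print(pairs)
```

Real implementations typically also enforce a caliper (a maximum allowed score distance) so that poorly matched pairs are discarded rather than forced.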

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE The aim of the present study was to evaluate dose reduction in contrast-enhanced chest computed tomography (CT) by comparing the three latest generations of Siemens CT scanners used in clinical practice. We analyzed the radiation dose required with filtered back projection (FBP) and with an iterative reconstruction (IR) algorithm to yield the same image quality. Furthermore, the influence of the most recent integrated circuit detector (ICD; Stellar detector, Siemens Healthcare, Erlangen, Germany) on radiation dose was investigated. MATERIALS AND METHODS 136 patients were included. Scan parameters were set to a routine thorax protocol: SOMATOM Sensation 64 (FBP), SOMATOM Definition Flash (IR), and SOMATOM Definition Edge (ICD and IR). Tube current was set constantly to a reference level of 100 mA, with automated tube current modulation using reference milliamperes. CARE kV was used on the Flash and Edge scanners, while on the SOMATOM Sensation the tube potential was selected individually between 100 and 140 kVp by the medical technologists. Quality assessment was performed on soft-tissue kernel reconstructions. Dose was represented by the dose-length product (DLP). RESULTS The DLP with FBP for the average chest CT was 308 ± 99.6 mGy·cm. In contrast, the DLP for chest CT with the IR algorithm was 196.8 ± 68.8 mGy·cm (P = 0.0001). A further decline in dose was noted with IR and the ICD: DLP 166.4 ± 54.5 mGy·cm (P = 0.033). The dose reduction compared with FBP was 36.1% with IR and 45.6% with IR/ICD. The signal-to-noise ratio (SNR) in the aorta, bone, and soft tissue was favorable for the IR/ICD combination compared with FBP (P values ranged from 0.003 to 0.048). Overall contrast-to-noise ratio (CNR) improved with declining DLP. CONCLUSION The most recent technical developments, namely IR in combination with integrated circuit detectors, can significantly lower the radiation dose in chest CT examinations.

Relevance:

30.00%

Publisher:

Abstract:

Information about the size of a tumor and its temporal evolution is needed for the diagnosis and treatment of brain tumor patients. The aim of this study was to investigate the potential of a fully automatic segmentation method, BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth acquired by manual segmentation. Longitudinal magnetic resonance (MR) imaging data of 14 patients with newly diagnosed glioblastoma, encompassing 64 MR acquisitions ranging from preoperative up to 12-month follow-up images, were analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between the volumetric estimates of BraTumIA and those of each human rater for the contrast-enhancing (CET) and non-enhancing T2-hyperintense (NCE-T2) tumor compartments. A quantitative analysis of inter-rater disagreement showed that the disagreement between BraTumIA and each human rater was comparable to the disagreement between the human raters themselves. In summary, BraTumIA generated volumetric trend curves for the contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to the estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute for manual volumetric follow-up of these compartments.
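The reported agreement between BraTumIA and the human raters is expressed as correlation coefficients. As a small illustration of that comparison, the following computes Pearson's r between two invented volume series (the numbers are not from the study):

```python
import math

# Hypothetical tumour volumes (ml) across five follow-up scans; invented values.
bratumia = [42.0, 35.5, 20.1, 12.7, 15.3]   # automated estimates
manual   = [40.2, 36.8, 21.5, 11.9, 16.0]   # human-rater ground truth

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(bratumia, manual)
print(round(r, 3))
```

A high r here only means the two series track each other; studies like this one therefore also quantify absolute disagreement between raters, since a systematic bias can coexist with a near-perfect correlation.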

Relevance:

30.00%

Publisher:

Abstract:

INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. The requirements were (i) to use open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC); (ii) to use a suitable environment for statistical modelling and computation; and (iii) to produce an integrated, open-source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, so the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
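INTAMAP's back-end applies geostatistical interpolation in R; as a deliberately simplified stand-in for spatial interpolation of point data, the sketch below uses inverse-distance weighting on invented observations (the method and data here are illustrative, not INTAMAP's actual algorithms):

```python
# Inverse-distance-weighted interpolation (power 2) over scattered
# point measurements. Coordinates and values are invented.
observations = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0),
                ((0.0, 1.0), 30.0), ((1.0, 1.0), 40.0)]

def idw(target, obs):
    """Estimate the value at `target` as a distance-weighted average."""
    num = den = 0.0
    for (x, y), value in obs:
        d2 = (target[0] - x) ** 2 + (target[1] - y) ** 2
        if d2 == 0.0:
            return value          # exact hit on a measurement point
        w = 1.0 / d2              # weight = 1 / distance^2
        num += w * value
        den += w
    return num / den

print(idw((0.5, 0.5), observations))  # centre of the four points → 25.0
```

Unlike the geostatistical methods INTAMAP uses, IDW provides no error distribution for its predictions, which is precisely the information UncertML exists to encode.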

Relevance:

30.00%

Publisher:

Abstract:

The popularity of Computing degrees in the UK has increased significantly over recent years. In Northern Ireland, acceptances to Computer Science degrees rose by 40% between 2007 and 2015, with England seeing a 60% increase over the same period (UCAS, 2016). However, this growth is tempered by the fact that Computer Science degrees also continue to have the highest dropout rates.
At Queen's University Belfast we currently have a Level 1 intake of over 400 students across a number of computing pathways. Our drive as staff is to empower and motivate the students to engage fully with the course content. All students take a Java programming module, the aim of which is to provide an understanding of the basic principles of object-oriented design. To assess these skills, we have developed Jigsaw Java, an innovative assessment tool offering intelligent, semi-supervised automated marking of code.
Jigsaw Java allows students to answer programming questions using a drag-and-drop interface to place code fragments into position. The answer is compared to the sample solution and, if it matches, marks are allocated accordingly. If no match is found, the corresponding code is executed against sample data to determine whether its logic is acceptable. If it is, the solution is flagged for checking by staff and, if satisfactory, is saved as an alternative solution. Appropriate marks can then be allocated, and should another student submit the same placement of code fragments it need not be executed or checked again: the system now knows how to assess it.
Jigsaw Java can also award partial marks depending on code placement and will "learn" over time. Given the number of students, Jigsaw Java will improve the consistency and timeliness of marking.
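The marking flow described above (match against stored solutions, execute unmatched answers on sample data, flag for staff review, cache confirmed alternatives) can be sketched as follows. This is a hypothetical simplification in Python; the real tool assesses Java fragments, and the acceptance rule here is invented:

```python
# Sketch of the Jigsaw Java marking flow. A "placement" is the ordered
# tuple of code fragments a student dropped into position.
known_solutions = {("a", "b", "c")}   # the sample solution's placement
pending_review = []                   # answers awaiting staff confirmation

def run_on_sample_data(placement):
    # Stand-in for executing the assembled code against sample inputs;
    # this acceptance rule is invented purely for the sketch.
    return placement[-1] == "c"

def mark(placement):
    placement = tuple(placement)
    if placement in known_solutions:
        return "full marks"                       # matches a stored solution
    if run_on_sample_data(placement):
        pending_review.append(placement)          # staff verify it, then it
        known_solutions.add(placement)            # is cached as an alternative
        return "full marks (pending staff confirmation)"
    return "partial or zero marks"

print(mark(["a", "b", "c"]))   # exact match with the sample solution
print(mark(["b", "a", "c"]))   # new but logically acceptable placement
print(mark(["b", "a", "c"]))   # now recognised without re-execution
```

The cache is what makes the marking semi-supervised: staff effort is spent once per genuinely new solution, and every later identical submission is marked instantly.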

Relevance:

30.00%

Publisher:

Abstract:

Annotating Business Dynamics models with parameters and equations, in order to simulate the system under study and evaluate its simulation output, typically involves a great deal of manual work. In this paper we present an approach for the automated equation formulation of a given Causal Loop Diagram (CLD) and a set of associated time series with the help of neural network evolution (NEvo). NEvo enables the automated retrieval of surrogate equations for each quantity in the given CLD, producing a fully annotated CLD that can be used in later simulations to predict future KPI development. At the end of the paper, we provide a detailed evaluation of NEvo on a business use case to demonstrate its single-step prediction capabilities.
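As a toy illustration of evolutionary surrogate fitting in the spirit of NEvo: NEvo evolves neural networks, whereas the sketch below only evolves the coefficients of a fixed linear form against an invented time series, but the evolve-evaluate-select loop is the same shape:

```python
import random

random.seed(0)
# Invented time series generated by y = 2t + 1; in a CLD setting this
# would be the observed trajectory of one quantity.
series = [(t, 2.0 * t + 1.0) for t in range(10)]

def error(a, b):
    """Sum of squared residuals of the candidate equation y = a*t + b."""
    return sum((a * t + b - y) ** 2 for t, y in series)

best = (0.0, 0.0)
for _ in range(5000):
    a = best[0] + random.gauss(0, 0.1)   # mutation: perturb the current best
    b = best[1] + random.gauss(0, 0.1)
    if error(a, b) < error(*best):
        best = (a, b)                    # selection: keep only improvements

print(round(best[0], 2), round(best[1], 2))
```

The evolved coefficients converge towards the generating values (2, 1); NEvo does the analogous search over network weights and structures, one surrogate per CLD quantity.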

Relevance:

30.00%

Publisher:

Abstract:

One of the main unresolved questions in science is how non-living matter became alive, a process known as abiogenesis, which aims to explain how, starting from a primordial-soup scenario containing simple molecules and following a "bottom-up" approach, complex biomolecules emerged and formed the first living system, known as a protocell. A protocell is defined by the interplay of three sub-systems considered requirements for life: information molecules, metabolism, and compartmentalization. This thesis investigates the role of compartmentalization during the emergence of life, how simple membrane aggregates could evolve into entities able to develop "life-like" behaviours, and in particular how such evolution could happen without the presence of information molecules. Our ultimate objective is to create an autonomous evolvable system. To do so, we try to engineer life following a "top-down" approach: an initial platform capable of evolving chemistry is constructed, with the chemistry dependent on the robotic adjunct, and the platform is then de-constructed in iterative operations until it is fully disconnected from the evolvable system, leaving the system inherently autonomous. The first project of this thesis describes how the initial platform was designed and built. The platform was based on the model of a standard liquid-handling robot, the main difference from similar robots being that we used a 3D printer to prototype the robot and build its main equipment, such as the liquid-dispensing system, the tool-movement mechanism, and the washing procedures. The robot was able to mix different components and create populations of droplets in a Petri dish filled with an aqueous phase. The Petri dish was observed by a camera, which analysed the behaviours exhibited by the droplets and fed this information back to the robot.
Using this loop, the robot was able to implement an evolutionary algorithm in which populations of droplets were evolved towards defined life-like behaviours. The second project of this thesis aimed to remove as many mechanical parts as possible from the robot while keeping the evolvable chemistry intact. To do so, we encapsulated the functionalities of the previous liquid-handling robot into a single monolithic 3D-printed device. This device was able to mix different components and generate populations of droplets in an aqueous phase, and it was also equipped with a camera to analyse the experiments. Moreover, because the devices were fabricated entirely in a 3D printer, we could also alter the experimental arena by adding different obstacles among which to evolve the droplets, enabling us to study how environmental changes can shape evolution. In doing so, we embodied evolutionary characteristics into the device itself, removing constraints from the physical platform and taking a step towards a possible autonomous evolvable system.
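The mix-observe-select loop described above can be sketched as a simple evolutionary algorithm. Everything below is an invented stand-in: the two-component "recipe", the camera fitness function, and the target behaviour are all hypothetical, not the thesis's actual chemistry or image analysis:

```python
import random

random.seed(1)

def camera_fitness(recipe):
    # Stand-in for the camera's behaviour analysis: pretend the life-like
    # behaviour peaks when the first component makes up 75% of the mix.
    a, b = recipe
    return -abs(a / (a + b) - 0.75)

# Each individual is a droplet recipe: volumes of two components.
population = [(random.random(), random.random()) for _ in range(8)]

for generation in range(30):
    population.sort(key=camera_fitness, reverse=True)
    parents = population[:4]                       # selection: keep the best half
    children = [(max(1e-6, a + random.gauss(0, 0.05)),
                 max(1e-6, b + random.gauss(0, 0.05)))
                for a, b in parents]               # mutation: jitter the volumes
    population = parents + children                # next generation

best = max(population, key=camera_fitness)
print(round(best[0] / (best[0] + best[1]), 2))
```

In the real platform the fitness evaluation is a physical experiment (dispense droplets, film them, score the footage), which is why closing this loop in hardware, rather than the algorithm itself, is the hard part.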

Relevance:

20.00%

Publisher:

Abstract:

Timely feedback is a vital component of the learning process. It is especially important for beginning students in Information Technology, since many have not yet formed an effective internal model of a computer that they can use to construct viable knowledge. Research has shown that learning efficiency increases when students receive immediate feedback. Automatic analysis of student programs has the potential to provide such immediate feedback and to assist teaching staff in the marking process. This paper describes a "fill in the gap" programming analysis framework which tests students' solutions, gives feedback on their correctness, detects logic errors, and provides hints on how to fix these errors. Currently, the framework is used with the Environment for Learning to Programming (ELP) system at Queensland University of Technology (QUT); however, it can be integrated into any existing online learning environment or programming Integrated Development Environment (IDE).
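A "fill in the gap" checker of this kind can be sketched in miniature. The framework itself analyses student code within ELP; this hypothetical Python version assembles a student's gap answer into a template, runs it against sample tests, and reports the first failure as a hint (the exercise and feedback wording are invented):

```python
# Invented exercise: complete the body of double(x).
TEMPLATE = "def double(x):\n    return {gap}\n"
TESTS = [(0, 0), (2, 4), (-3, -6)]   # (input, expected output) pairs

def check(gap_code):
    namespace = {}
    # Splice the student's fragment into the template and load it.
    exec(TEMPLATE.format(gap=gap_code), namespace)
    failures = [(x, want, namespace["double"](x))
                for x, want in TESTS
                if namespace["double"](x) != want]
    if not failures:
        return "correct"
    x, want, got = failures[0]       # turn the first failure into a hint
    return f"logic error: double({x}) returned {got}, expected {want}"

print(check("x * 2"))
print(check("x + 2"))
```

Pointing at a concrete failing input, rather than just "wrong", is what turns automated testing into the immediate, instructive feedback the paper argues for.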

Relevance:

20.00%

Publisher:

Abstract:

Ordinary desktop computers continue to obtain ever more resources (increased processing power, memory, network speed and bandwidth), yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally, cycle stealing systems have used client-server architectures, which place significant limits on their ability to scale and on the range of applications they can support. By applying a fully decentralised network model to cycle stealing, the limits of centralised models can be overcome.

Using decentralised networks in this manner presents difficulties not encountered in their previous uses. Generally, decentralised applications do not require any significant fault tolerance guarantees; high-performance computing, on the other hand, requires very stringent guarantees to ensure correct results are obtained. Unfortunately, mechanisms developed for traditional high-performance computing cannot simply be translated, because of their reliance on a reliable storage mechanism, which is not available in the highly dynamic world of P2P computing. As part of this research a fault tolerance system has been created which provides considerable reliability without the need for persistent storage.

As well as increased scalability, fully decentralised networks allow volunteers to communicate directly, opening the possibility of supporting applications whose tasks require direct, message-passing style communication. Previous cycle stealing systems have supported only embarrassingly parallel applications and applications with limited forms of communication, so a new programming model has been developed which can support this style of communication within a cycle stealing context.

In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication, and methods for optimising object locality on decentralised networks.
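The abstract does not detail the fault tolerance mechanism itself. One common way to obtain reliability on volunteer networks without persistent storage is replicated execution with voting, sketched below as an assumption for illustration, not as the thesis's actual design:

```python
# Replicated task execution with quorum voting: run the same task on
# several peers and accept a result once enough of them agree. This is
# a generic volunteer-computing technique, not necessarily the one the
# thesis implements.
from collections import Counter

def vote(results, quorum=2):
    """Accept a task result once `quorum` peers have returned it."""
    winner, count = Counter(results).most_common(1)[0]
    return winner if count >= quorum else None

# Three volunteers ran the same task; one returned a corrupt value.
print(vote([42, 42, 17]))   # quorum reached → 42
print(vote([42, 17, 99]))   # no agreement yet → None
```

Voting trades extra computation for reliability, which suits cycle stealing well: the stolen cycles are otherwise idle, so redundant execution is cheap compared with maintaining reliable shared storage.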