972 results for Automated sorting system


Relevance:

30.00%

Publisher:

Abstract:

OBJECT: Preliminary experience with the C-Port Flex-A Anastomosis System (Cardica, Inc.) to enable rapid automated anastomosis has been reported in coronary artery bypass surgery. The goal of the current study was to define the feasibility and safety of this method for high-flow extracranial-intracranial (EC-IC) bypass surgery in a clinical series. METHODS: In a prospective study design, patients with symptomatic carotid artery (CA) occlusion were selected for C-Port-assisted high-flow EC-IC bypass surgery if they met the following criteria: 1) transient or moderate permanent symptoms of focal ischemia; 2) CA occlusion; 3) hemodynamic instability; and 4) provision of informed consent. Bypasses were done using a radial artery graft that was proximally anastomosed to the superficial temporal artery trunk, the cervical external CA, or the common CA. All distal cerebral anastomoses were performed on M2 branches using the C-Port Flex-A system. RESULTS: Within 6 months, 10 patients were enrolled in the study. The distal automated anastomosis could be accomplished in all patients; the median temporary occlusion time was 16.6 ± 3.4 minutes. Intraoperative digital subtraction angiography (DSA) confirmed good bypass function in 9 patients; in 1 the anastomosis was classified as fair. There was 1 major perioperative complication, the creation of a pseudoaneurysm due to a hardware problem. In all but 1 case the bypass was shown to be patent on DSA after 7 days; in 1 patient a late occlusion developed due to vasospasm after a sylvian hemorrhage. One-week follow-up DSA revealed transient asymptomatic extracranial spasm of the donor artery and the radial artery graft in 1 case. Two patients developed a limited zone of infarction on CT scanning during the follow-up course. CONCLUSIONS: In patients with symptomatic CA occlusion, C-Port Flex-A-assisted high-flow EC-IC bypass surgery is a technically feasible procedure. 
The system needs further modification to achieve a faster and safer anastomosis to enable a conclusive comparison with standard and laser-assisted methods for high-flow bypass surgery.

Relevance:

30.00%

Publisher:

Abstract:

While sound and video may capture viewers' attention, interaction can captivate them. This was not possible before the advent of Digital Television. In fact, what lies at the heart of the Digital Television revolution is this new type of interactive content, offered in the form of interactive Television (iTV) services. On top of that, the new world of converged networks has created demand for a new type of converged services on a range of mobile terminals (Tablet PCs, PDAs and mobile phones). This paper presents a new approach to service creation that allows for the semi-automatic translation of simulations and rapid prototypes created in the accessible desktop multimedia authoring package Macromedia Director into services ready for broadcast. This is achieved by a series of tools that de-skill and speed up the process of creating digital TV user interfaces (UI) and applications for mobile terminals. The benefits of rapid prototyping are essential for the production of these new types of services and are therefore discussed in the first section of this paper. The following sections present an overview of the operation of the content, service creation and management sub-systems, illustrating why these tools form an important and integral part of a system responsible for creating, delivering and managing converged broadcast and telecommunications services. The next section examines a number of candidate metadata languages for describing the iTV service user interface, along with the schema language adopted in this project. A detailed description of the operation of the two tools is provided to offer insight into how they can be used to de-skill and speed up the process of creating digital TV user interfaces and applications for mobile terminals. 
Finally, representative broadcast oriented and telecommunication oriented converged service components are also introduced, demonstrating how these tools have been used to generate different types of services.

Relevance:

30.00%

Publisher:

Abstract:

Automated sorting systems (sorters) are of great importance in intralogistics. Sorters achieve a sustained high sorting rate combined with a low missort rate and therefore often form the central building block in material flow systems with high turnover rates. Distribution centres with storage and order-picking functions are typical examples of such material flow systems. A sorter consists of the subsystems infeed, distribution conveyor and discharge points. The following considerations focus on a sorter model with a ring-shaped distribution conveyor and single-slot occupancy: each carrier can transport exactly one good, so the distribution conveyor has a fixed transport capacity. Such conveyors are usually implemented as tilt-tray or cross-belt sorters. The theoretical sorting rate for this sorter type can be determined from the travel speed and the carrier pitch. In practical operation this system capacity is rarely reached; various factors in the infeed and discharge areas reduce performance. This contribution presents considerations for determining the mean queue length in the infeed area and the proportion of recirculating goods on the distribution conveyor. It is based on a research project funded by the German Federal Ministry of Economics and Technology (BMWi) through the Arbeitsgemeinschaft industrieller Forschungsvereinigungen "Otto von Guericke" (AiF) and carried out on behalf of the Bundesvereinigung Logistik e.V. (BVL).
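
The theoretical sorting rate mentioned above follows directly from the travel speed and the carrier pitch. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def theoretical_throughput(speed_m_per_s: float, pitch_m: float) -> float:
    """Theoretical capacity of a single-occupancy ring sorter, in
    items per hour: carriers pass the discharge point at a rate of
    speed / pitch per second, each holding at most one item."""
    return speed_m_per_s / pitch_m * 3600.0
```

With a travel speed of 2 m/s and a carrier pitch of 0.5 m this gives 14,400 items per hour; queueing at the infeed and recirculating goods keep the achievable rate below this bound, which is what the contribution quantifies.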

Relevance:

30.00%

Publisher:

Abstract:

The Chair of Transportation and Warehousing at the University of Dortmund, together with its industrial partner, has developed and implemented a decentralized control system based on embedded technology and Internet standards. This innovative, highly flexible system uses autonomous software modules to control the flow of unit loads in real time. The system is integrated into the Chair's test facility, which consists of a wide range of conveying and sorting equipment. It was built for proof-of-concept purposes and will be used for further research in the fields of decentralized automation and embedded controls. This presentation describes the implementation of this decentralized control system.

Relevance:

30.00%

Publisher:

Abstract:

Exploiting a sorting system's maximum performance demands a continuous supply of goods from the infeed lines to the sorter. This applies especially to sorters with a single infeed line, because the sorter's performance is then limited by that of the infeed line. This paper presents different infeed line designs for the Rotary Sorter, with differing performance levels. The focus lies on a specific conveying system that synchronises the goods with the sorter at a rate of 6000 pieces per hour using one dynamic infeed line. This eliminates the need for extensive adjustment control of the serial conveyors in the infeed line.
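
The 6000 pieces per hour quoted above translate into a hard timing constraint for the dynamic infeed line; a small illustrative calculation (names are mine, not the paper's):

```python
def infeed_cycle_time_s(items_per_hour: float) -> float:
    """Average time window, in seconds, in which the infeed line
    must synchronise one good with a passing sorter carrier."""
    return 3600.0 / items_per_hour
```

At 6000 pieces per hour the infeed must place a good on a carrier every 0.6 s, which is why a single dynamic infeed line replaces the adjustment control of several serial conveyors.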

Relevance:

30.00%

Publisher:

Abstract:

The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field-shaping devices with the patient must be repeated daily, up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), a patient's portal images can be visualized daily in real time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for the lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation, and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure of generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology. 
Both the moments method and the cross-correlation technique were implemented within an experimental radiotherapy picture archival and communication system (RT-PACS) and were used clinically to evaluate the setup variability of two groups of cancer patients treated with and without an alpha-cradle immobilization aid. The tools developed in this project have proven to be very effective and have played an important role in detecting patient alignment errors and field-shape errors in treatment fields formed by a multileaf collimator (MLC).
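
The cross-correlation step described above can be sketched with an FFT-based shift estimator; this is a generic illustration of the technique, not the authors' implementation:

```python
import numpy as np

def estimate_shift(reference: np.ndarray, image: np.ndarray):
    """Estimate the integer (row, col) translation of `image`
    relative to `reference` from the peak of their circular
    cross-correlation, computed via FFTs."""
    spectrum = np.conj(np.fft.fft2(reference)) * np.fft.fft2(image)
    corr = np.fft.ifft2(spectrum).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the array midpoint wrap around to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Subpixel variants interpolate around the correlation peak; the work above reports anatomical alignment to within 1 mm and 1° with its method.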

Relevance:

30.00%

Publisher:

Abstract:

Manual counting of bacterial colony forming units (CFUs) on agar plates is laborious and error-prone. We therefore implemented a colony counting system with a novel segmentation algorithm to discriminate bacterial colonies from blood and other agar plates. Colony counter hardware was designed and a novel segmentation algorithm was written in MATLAB. In brief, pre-processing with top-hat filtering to obtain a uniform background was followed by the segmentation step, during which the colony images were extracted from the blood agar and individual colonies were separated. A Bayes classifier was then applied to count the final number of bacterial colonies, as some of the colonies could still be concatenated into larger groups. To assess the accuracy and performance of the colony counter, we tested automated colony counting on different agar plates with known CFU numbers of S. pneumoniae, P. aeruginosa and M. catarrhalis and showed excellent performance.
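
The pipeline above (top-hat background flattening, segmentation, counting) was written in MATLAB; a simplified Python sketch of the same idea, with the Bayes step for splitting touching colonies omitted and parameter values chosen for illustration:

```python
import numpy as np
from scipy import ndimage

def count_colonies(image: np.ndarray, threshold: float = 0.5,
                   window: int = 15) -> int:
    """Count bright colony-like blobs on an uneven background:
    top-hat filtering removes the slowly varying agar background,
    thresholding extracts colony pixels, and connected components
    are counted."""
    flattened = ndimage.white_tophat(image, size=window)
    mask = flattened > threshold
    _, n_colonies = ndimage.label(mask)
    return n_colonies
```

A Bayes classifier over blob features (area, shape) would then reassign merged components to colony counts, as described in the abstract.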

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Electrochemical conversion of xenobiotics has been shown to mimic human phase I metabolism for a few compounds. MATERIALS & METHODS Twenty-one compounds were analyzed with a semiautomated electrochemical setup and mass spectrometry detection. RESULTS The system was able to mimic some metabolic pathways, such as oxygen gain, dealkylation and deiodination, but many of the expected and known metabolites were not produced. CONCLUSION Electrochemical conversion is a useful approach for the preparative synthesis of some types of metabolites, but as a screening method for unknown phase I metabolites, the method is, in our opinion, inferior to incubation with human liver microsomes and in vivo experiments with laboratory animals, for example.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND A precise detection of volume change allows better estimation of the biological behavior of lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological processes and give additional confidence to radiologists. PURPOSE To compare two different postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) in CT volumetric measurement and to analyze the effect of a soft (B30) and a hard reconstruction filter (B70) on automated volume measurement. MATERIAL AND METHODS Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT exam was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24 × 1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time product, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two different postprocessing algorithms based on the two reconstruction filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between the two methods were used for comparison. RESULTS At soft reconstruction filters the LMS system produced mean nodule volumes that were 34.1% (P < 0.0001) larger than those by the LungCARE® system. The VME% was 42.2%, with limits of agreement between -53.9% and 138.4%. The volume measured with soft filters (B30) was significantly larger than with hard filters (B70): 11.2% for LMS and 1.6% for LungCARE®, respectively (both with P < 0.05). LMS measured greater volumes with both filters, 13.6% for soft and 3.8% for hard filters, respectively (P < 0.01 and P > 0.05). 
CONCLUSION There is a substantial inter-software (LMS/LungCARE®) as well as intra-software variability (B30/B70) in lung nodule volume measurement; therefore, it is mandatory to use the same equipment with the same reconstruction filter for the follow-up of lung nodule volume.
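
The comparison statistics used above (mean relative difference and limits of agreement) can be sketched as follows; the exact VME% convention in the paper may differ, so treat this as a generic Bland-Altman-style computation:

```python
import numpy as np

def volume_agreement(v_a, v_b):
    """Per-nodule relative volume difference (percent of the pairwise
    mean) between two measurement methods, returning its mean and the
    Bland-Altman 95% limits of agreement."""
    v_a = np.asarray(v_a, dtype=float)
    v_b = np.asarray(v_b, dtype=float)
    rel = 100.0 * (v_a - v_b) / ((v_a + v_b) / 2.0)
    mean = rel.mean()
    sd = rel.std(ddof=1)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)
```

Wide limits of agreement, as reported above, indicate that the two software packages cannot be used interchangeably for follow-up volumetry.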

Relevance:

30.00%

Publisher:

Abstract:

In electroweak-boson production processes with a jet veto, higher-order corrections are enhanced by logarithms of the veto scale over the invariant mass of the boson system. In this paper, we resum these Sudakov logarithms at next-to-next-to-leading logarithmic accuracy and match our predictions to next-to-leading-order (NLO) fixed-order results. We perform the calculation in an automated way, for arbitrary electroweak final states and in the presence of kinematic cuts on the leptons produced in the decays of the electroweak bosons. The resummation is based on a factorization theorem for the cross sections into hard functions, which encode the virtual corrections to the boson production process, and beam functions, which describe the low-pT emissions collinear to the beams. The one-loop hard functions for arbitrary processes are calculated using the MadGraph5_aMC@NLO framework, while the beam functions are process independent. We perform the resummation for a variety of processes, in particular for W+W− pair production followed by leptonic decays of the W bosons.
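
The factorization described above can be written schematically as follows (a standard jet-veto form; the symbols here are generic placeholders, not the paper's notation):

```latex
\sigma(p_T^{\mathrm{veto}})
  \;=\; \mathcal{H}\bigl(\{p_i\},\mu\bigr)\,
        \mathcal{B}_1\bigl(\xi_1,\,p_T^{\mathrm{veto}},\mu\bigr)\,
        \mathcal{B}_2\bigl(\xi_2,\,p_T^{\mathrm{veto}},\mu\bigr)
```

where the hard function carries the virtual corrections to boson production, the beam functions describe collinear emissions below the veto scale, and the large logarithms of the veto scale over the boson-system invariant mass are resummed by renormalization-group evolution between the two scales.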

Relevance:

30.00%

Publisher:

Abstract:

AMS-14C applications often require the analysis of small samples. Such is the case for atmospheric aerosols, where frequently only a small amount of sample is available. The ion beam physics group at ETH Zurich has designed an Automated Graphitization Equipment (AGE III) for routine graphite production for AMS analysis from organic samples of approximately 1 mg. In this study, we explore the potential use of the AGE III for graphitization of particulate carbon collected on quartz filters. In order to test the methodology, samples of reference materials and blanks of different sizes were prepared in the AGE III and the graphite was analyzed in a MICADAS AMS (ETH) system. The graphite samples prepared in the AGE III showed recovery yields higher than 80% and reproducible 14C values for masses ranging from 50 to 300 µg. Reproducible radiocarbon values were also obtained for aerosol filters of small sizes that had been graphitized in the AGE III. As a case study, the tested methodology was applied to PM10 samples collected in two urban cities in Mexico in order to compare the source apportionment of biomass and fossil fuel combustion. The obtained 14C data showed that carbonaceous aerosols from Mexico City have a much lower biogenic signature than those from the smaller city of Cuernavaca.
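
The biomass versus fossil apportionment referred to above is, at first order, a two-endmember balance on the measured fraction of modern carbon; a minimal sketch (the reference value for purely biogenic carbon is an assumed placeholder):

```python
def biomass_fraction(f_m_sample: float, f_m_biogenic: float = 1.0) -> float:
    """Fossil carbon is 14C-free, so the biomass-derived fraction of
    total carbon is the sample's fraction modern divided by the
    fraction modern assumed for purely biogenic carbon (contemporary
    reference values are typically slightly above 1)."""
    return f_m_sample / f_m_biogenic
```

For example, a filter with fraction modern 0.6 against a biogenic reference of 1.05 would imply that roughly 57% of the carbon is biomass-derived, with the remainder fossil.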

Relevance:

30.00%

Publisher:

Abstract:

Background: Diabetes mellitus is spreading throughout the world and diabetic individuals have been shown to often assess their food intake inaccurately; therefore, it is a matter of urgency to develop automated diet assessment tools. The recent availability of mobile phones with enhanced capabilities, together with the advances in computer vision, have permitted the development of image analysis apps for the automated assessment of meals. GoCARB is a mobile phone-based system designed to support individuals with type 1 diabetes during daily carbohydrate estimation. In a typical scenario, the user places a reference card next to the dish and acquires two images using a mobile phone. A series of computer vision modules detect the plate and automatically segment and recognize the different food items, while their 3D shape is reconstructed. Finally, the carbohydrate content is calculated by combining the volume of each food item with the nutritional information provided by the USDA Nutrient Database for Standard Reference. Objective: The main objective of this study is to assess the accuracy of the GoCARB prototype when used by individuals with type 1 diabetes and to compare it to their own performance in carbohydrate counting. In addition, the user experience and usability of the system is evaluated by questionnaires. Methods: The study was conducted at the Bern University Hospital, “Inselspital” (Bern, Switzerland) and involved 19 adult volunteers with type 1 diabetes, each participating once. Each study day, a total of six meals of broad diversity were taken from the hospital’s restaurant and presented to the participants. The food items were weighed on a standard balance and the true amount of carbohydrate was calculated from the USDA nutrient database. Participants were asked to count the carbohydrate content of each meal independently and then by using GoCARB. At the end of each session, a questionnaire was completed to assess the user’s experience with GoCARB. 
Results: The mean absolute error was 27.89 (SD 38.20) grams of carbohydrate for the participants' estimates, whereas the corresponding value for the GoCARB system was 12.28 (SD 9.56) grams of carbohydrate, a significantly better performance (P=.001). In 75.4% (86/114) of the meals the GoCARB automatic segmentation was successful, and 85.1% (291/342) of individual food items were successfully recognized. Most participants found GoCARB easy to use. Conclusions: This study indicates that the system is able to estimate, on average, the carbohydrate content of meals with higher accuracy than individuals with type 1 diabetes can. The participants thought the app was useful and easy to use. GoCARB seems to be a well-accepted supportive mHealth tool for the assessment of served-on-a-plate meals.
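
The final carbohydrate computation described above combines each recognised item's reconstructed volume with database values; a hedged sketch (field names, units and the example figures are illustrative assumptions, not GoCARB's actual data model):

```python
def meal_carbohydrate_g(items) -> float:
    """Total carbohydrate of a meal from (volume_cm3, density_g_per_cm3,
    carb_g_per_100g) triples: volume times density gives the item's
    mass, which is then scaled by its carbohydrate content per 100 g."""
    return sum(volume_cm3 * density * carb_per_100g / 100.0
               for volume_cm3, density, carb_per_100g in items)
```

For example, 150 cm3 of cooked rice at an assumed 0.75 g/cm3 and 28 g carbohydrate per 100 g contributes about 31.5 g of carbohydrate to the meal total.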

Relevance:

30.00%

Publisher:

Abstract:

Academic and industrial research in the late 90s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, have been developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process. Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously is described in the second chapter of this dissertation. However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove to be superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method, be it distance-based, ML, or maximum parsimony (MP), should be chosen for any particular data set. A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. 
However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy). Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects an appropriate method automatically. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
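
The "beam" search principle mentioned above keeps only the best few candidates at each expansion step; a generic sketch (the adaptive beam widening driven by local topology reliability information is not reproduced here):

```python
def beam_search(start, expand, score, beam_width=3, steps=10):
    """Keep the `beam_width` best-scoring candidates at each step,
    expanding every survivor; return the best state seen overall."""
    best = start
    beam = [start]
    for _ in range(steps):
        candidates = [c for state in beam for c in expand(state)]
        if not candidates:
            break
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
        if score(beam[0]) > score(best):
            best = beam[0]
    return best
```

In the phylogenetic setting, states would be candidate tree topologies and `score` a distance-based objective; widening the beam trades run time against the risk of discarding the branch that leads to the optimal topology.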

Relevance:

30.00%

Publisher:

Abstract:

Sampling was conducted from March 24 to August 5, 2010, in the fjord branch Kapisigdlit, located in the inner part of the Godthåbsfjord system, West Greenland. The vessel "Lille Masik" was used during all cruises except on June 17-18, when sampling was done from RV Dana (National Institute for Aquatic Resources, Denmark). A total of 15 cruises (of 1-2 days duration), 7-10 days apart, were carried out along a transect composed of 6 stations (St.) spanning the length of the 26 km long fjord branch. St. 1 was located at the mouth of the fjord branch and St. 6 at the end of the fjord branch, in the middle of a shallower inner creek. St. 1-4 covered the deeper parts of the fjord, and St. 5 was located on the slope leading up to the shallow inner creek. Mesozooplankton was sampled by vertical net tows using a Hydrobios Multinet (type Mini) equipped with a flow meter and 50 µm mesh nets, or a WP-2 net of 50 µm mesh size equipped with a non-filtering cod-end. Sampling was conducted at various times of day at the different stations. The nets were hauled at a speed of 0.2-0.3 m/s from 100, 75 and 50 m depth to the surface at St. 2 and 4, St. 5 and St. 6, respectively. The contents were immediately preserved in buffered formalin (4% final concentration). All samples were analyzed at the Plankton Sorting and Identification Center in Szczecin (www.nmfri.gdynia.pl). Samples containing high numbers of zooplankton were split into subsamples. All copepods and other zooplankton were identified to the lowest possible taxonomic level (approx. 400 individuals per sample), length-measured and counted. Copepods were sorted into development stages (nauplii stage 1 to copepodite stage 6) using morphological features and sizes, and up to 10 individuals of each stage were length-measured.
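
Counts from split subsamples, as described above, are conventionally scaled back to abundances using the split fraction and the flow-meter volume; a minimal sketch (function and parameter names are illustrative):

```python
def abundance_per_m3(counted: int, split_fraction: float,
                     volume_filtered_m3: float) -> float:
    """Scale individuals counted in a subsample to abundance per
    cubic metre: divide by the fraction of the sample examined and
    by the volume of water filtered by the net."""
    return counted / split_fraction / volume_filtered_m3
```

For example, 100 copepods counted in a one-quarter split of a tow that filtered 50 m3 of water corresponds to 8 individuals per m3.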

Relevance:

30.00%

Publisher:

Abstract:

This study provides a theoretical assessment of the potential bias due to differential lateral transport in multi-proxy studies based on a range of marine microfossils. Microfossils preserved in marine sediments are at the centre of numerous proxies for paleoenvironmental reconstructions. The precision of these proxies rests on the assumption that they accurately represent the overlying water column properties and faunas. Here we assess the possibility of a syn-depositional bias in sediment assemblages caused by horizontal drift in the water column, due to differential settling velocities of sedimenting particles based on their shape, size and density, and due to differences in current velocities. Specifically, we calculate the post-mortem lateral transport undergone by planktic foraminifera and a range of other biological proxy carriers (diatoms, radiolaria and fecal pellets transporting coccolithophores) in several regions with high current velocities. We find that lateral transport of different planktic foraminiferal species is minimal owing to their high settling velocities: no significant shape- or size-dependent sorting occurs before reaching the sediment, making planktic foraminifera ideal proxy carriers. In contrast, diatoms, radiolaria and fecal pellets can be transported up to 500 km in some areas. In the Agulhas Current, for example, transport can lead to differences of up to 2°C between temperature reconstructions from proxies with different settling velocities. Sediment samples are therefore likely to contain different proportions of local and imported particles, decreasing the precision of proxies based on these groups and the accuracy of the temperature reconstruction.
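
The settling-velocity argument above reduces, to first order, to a residence-time calculation; a sketch with illustrative numbers (the study itself resolves current velocities with depth):

```python
def lateral_transport_km(depth_m: float, settling_m_per_day: float,
                         current_m_per_s: float) -> float:
    """First-order post-mortem drift: the time taken to sink through
    the water column multiplied by the mean horizontal current speed."""
    sinking_days = depth_m / settling_m_per_day
    return current_m_per_s * 86400.0 * sinking_days / 1000.0
```

Under these assumptions, a fast-sinking foraminifer (hundreds of metres per day) drifts only a few tens of kilometres, whereas a slowly settling diatom or radiolarian in a strong current can accumulate drift on the order of the 500 km quoted above.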