146 results for thrombocyte volume
Abstract:
A system is described for calculating volume from a sequence of multiplanar 2D ultrasound images. Ultrasound images are captured using a video digitising card (Hauppauge Win/TV card) installed in a personal computer, and regions of interest are transformed into 3D space using position and orientation data obtained from an electromagnetic device (Polhemus Fastrak). The accuracy of the system was assessed by scanning 10 water-filled balloons (13-141 ml), 10 kidneys (147-200 ml) and 16 fetal livers (8-37 ml) in water using an Acuson 128XP/10 (5 MHz curvilinear probe). Volume was calculated using the ellipsoid, planimetry, tetrahedral and ray tracing methods and compared with the actual volume measured by weighing (balloons) and water displacement (kidneys and livers). The mean percentage error for the ray tracing method was 0.9 ± 2.4%, 2.7 ± 2.3% and 6.6 ± 5.4% for balloons, kidneys and livers, respectively. So far the system has been used clinically to scan fetal livers and lungs, neonatal brain ventricles and adult prostate glands.
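As an illustration of how a volume can be recovered once traced outlines sit in 3D, the sketch below shows one common mesh-based calculation (summing signed tetrahedron volumes over a closed triangle mesh). It is a minimal example under stated assumptions, not the authors' implementation; the function name and array layout are invented for illustration.

    import numpy as np

    def mesh_volume(triangles):
        """Volume enclosed by a closed, consistently oriented triangle mesh.

        triangles: (n, 3, 3) array; each row holds the three 3D vertices of one
        triangle built from adjacent region-of-interest outlines (assumed given).
        """
        v0, v1, v2 = triangles[:, 0], triangles[:, 1], triangles[:, 2]
        # Signed volume of the tetrahedron formed by each face and the origin;
        # by the divergence theorem the sum over the closed surface is the enclosed volume.
        signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0
        return abs(signed.sum())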
Abstract:
A new system is described for estimating volume from a series of multiplanar 2D ultrasound images. Ultrasound images are captured using a personal computer video digitizing card, and an electromagnetic localization system is used to record the pose of the ultrasound images. The accuracy of the system was assessed by scanning four groups of ten cadaveric kidneys on four different ultrasound machines. Scan image planes were oriented either radially, in parallel or slanted at 30° to the vertical. The cross-sectional images of the kidneys were traced using a mouse and the outline points transformed to 3D space using the Fastrak position and orientation data. Points on adjacent region-of-interest outlines were connected to form a triangle mesh and the volume of the kidneys estimated using the ellipsoid, planimetry, tetrahedral and ray tracing methods. There was little difference between the results for the different scan techniques or volume estimation algorithms, although, perhaps as expected, the ellipsoid results were the least precise. For radial scanning and ray tracing, the mean and standard deviation of the percentage errors for the four different machines were as follows: Hitachi EUB-240, −3.0 ± 2.7%; Tosbee RM3, −0.1 ± 2.3%; Hitachi EUB-415, 0.2 ± 2.3%; Acuson, 2.7 ± 2.3%.
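For reference, the ellipsoid estimate mentioned above is conventionally computed from three orthogonal diameters, and accuracy is reported as a percentage error against the true volume. The notation below is the standard form, assumed here rather than quoted from the paper:

$$ V_{\mathrm{ellipsoid}} = \frac{\pi}{6}\, d_1 d_2 d_3, \qquad \mathrm{error}\,(\%) = 100\,\frac{V_{\mathrm{estimated}} - V_{\mathrm{actual}}}{V_{\mathrm{actual}}}. $$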
Abstract:
Sixteen formalin-fixed foetal livers were scanned in vitro using a new system for estimating volume from a sequence of multiplanar 2D ultrasound images. Three different scan techniques were used (radial, parallel and slanted) and four volume estimation algorithms (ellipsoid, planimetry, tetrahedral and ray tracing). Actual liver volumes were measured by water displacement. Twelve of the sixteen livers also received x-ray computed tomography (CT) and magnetic resonance (MR) scans and the volumes were calculated using voxel counting and planimetry. The percentage accuracy (mean ± SD) was 5.3 ± 4.7%, −3.1 ± 9.6% and −0.03 ± 9.7% for ultrasound (radial scans, ray tracing volumes), MR and CT (voxel counting), respectively. The new system may be useful for accurately estimating foetal liver volume in utero.
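The voxel-counting estimate used for the CT and MR data amounts to multiplying the number of segmented voxels by the volume of a single voxel. A minimal sketch follows, assuming a binary segmentation mask and known voxel spacing (names are illustrative, not from the paper):

    import numpy as np

    def voxel_volume_ml(mask, spacing_mm):
        """Voxel-counting volume from a binary segmentation.

        mask: boolean 3D array marking voxels inside the organ (segmentation assumed given).
        spacing_mm: (dz, dy, dx) voxel dimensions in millimetres.
        """
        voxel_mm3 = float(np.prod(spacing_mm))
        return mask.sum() * voxel_mm3 / 1000.0  # 1 ml = 1000 mm^3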
Abstract:
In this video, words describing socially awkward conversations float around an animated cloud of gas. A cosmic stock music track accompanies the words. This work examines processes of signification. It emphasizes multiplicity and disconnection as fundamental and generative operations in making meaning. By playing with the simultaneity of internal monologues and external conversations, it draws attention to the seams, gaps and slippages that occur in signifying acts.
Abstract:
Nowadays, Workflow Management Systems (WfMSs) and, more generally, Process Management Systems (PMSs), which are process-aware Information Systems (PAISs), are widely used to support many human organizational activities, ranging from well-understood, relatively stable and structured processes (supply chain management, postal delivery tracking, etc.) to processes that are more complicated, less structured and may exhibit a high degree of variation (health-care, emergency management, etc.). Every aspect of a business process involves a certain amount of knowledge which may be complex depending on the domain of interest. The adequate representation of this knowledge is determined by the modeling language used. Some processes behave in a way that is well understood, predictable and repeatable: the tasks are clearly delineated and the control flow is straightforward. Recent discussions, however, illustrate the increasing demand for solutions for knowledge-intensive processes, where these characteristics are less applicable. The actors involved in the conduct of a knowledge-intensive process have to deal with a high degree of uncertainty. Tasks may be hard to perform and the order in which they need to be performed may be highly variable. Modeling knowledge-intensive processes can be complex, as it may be hard to capture at design-time what knowledge is available at run-time. In realistic environments, for example, actors lack important knowledge at execution time, or this knowledge can become obsolete as the process progresses. Even if each actor (at some point) has perfect knowledge of the world, it may not be certain of its beliefs at later points in time, since tasks by other actors may change the world without those changes being perceived. Typically, a knowledge-intensive process cannot be adequately modeled by classical, state-of-the-art process/workflow modeling approaches. In some respects there is a lack of maturity when it comes to the semantic aspects involved, both in terms of capturing them and of reasoning about them. The main focus of the 1st International Workshop on Knowledge-intensive Business Processes (KiBP 2012) was to investigate how techniques from different fields, such as Artificial Intelligence (AI), Knowledge Representation (KR), Business Process Management (BPM), Service Oriented Computing (SOC), etc., can be combined with the aim of improving the modeling and enactment phases of a knowledge-intensive process. The workshop was held as part of the program of the 2012 Knowledge Representation & Reasoning International Conference (KR 2012) in Rome, Italy, in June 2012. It was hosted by the Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti of Sapienza Università di Roma, with the financial support of the University, through grant 2010-C26A107CN9 TESTMED, and of the EU Commission, through the projects FP7-25888 Greener Buildings and FP7-257899 Smart Vortex. This volume contains the 5 papers accepted and presented at the workshop. Each paper was reviewed by three members of the internationally renowned Program Committee. In addition, a further paper was invited for inclusion in the workshop proceedings and for presentation at the workshop. Two keynote talks, one by Marlon Dumas (Institute of Computer Science, University of Tartu, Estonia) on "Integrated Data and Process Management: Finally?",
and the other by Yves Lesperance (Department of Computer Science and Engineering, York University, Canada) on "A Logic-Based Approach to Business Processes Customization", completed the scientific program. We would like to thank all the Program Committee members for their valuable work in selecting the papers, Andrea Marrella for his valuable work as publication and publicity chair of the workshop, and Carola Aiello and the consulting agency Consulta Umbria for the organization of this successful event.
A finite volume method for solving the two-sided time-space fractional advection-dispersion equation
Abstract:
The field of fractional differential equations provides a means for modelling transport processes within complex media which are governed by anomalous transport. Indeed, the application to anomalous transport has been a significant driving force behind the rapid growth and expansion of the literature in the field of fractional calculus. In this paper, we present a finite volume method to solve the time-space two-sided fractional advection-dispersion equation on a one-dimensional domain. Such an equation allows modelling different flow regime impacts from either side. The finite volume formulation provides a natural way to handle fractional advection-dispersion equations written in conservative form. The novel spatial discretisation employs fractionally-shifted Grünwald formulas to discretise the Riemann-Liouville fractional derivatives at control volume faces in terms of function values at the nodes, while the L1-algorithm is used to discretise the Caputo time fractional derivative. Results of numerical experiments are presented to demonstrate the effectiveness of the approach.
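For orientation, the two discretisations named above have well-known basic forms, shown here in standard notation; the paper's fractionally-shifted, face-based variant will differ in detail. The shifted Grünwald formula approximates the left Riemann-Liouville derivative of order $\alpha \in (1,2)$ on a uniform grid of spacing $h$, and the L1 formula approximates the Caputo derivative of order $\gamma \in (0,1)$ with time step $\Delta t$:

$$ \frac{\partial^{\alpha} u}{\partial x^{\alpha}}\bigg|_{x_i} \approx \frac{1}{h^{\alpha}} \sum_{k=0}^{i+1} g_k^{(\alpha)}\, u_{i-k+1}, \qquad g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k}, $$

$$ \frac{\partial^{\gamma} u}{\partial t^{\gamma}}\bigg|_{t_n} \approx \frac{\Delta t^{-\gamma}}{\Gamma(2-\gamma)} \sum_{j=0}^{n-1} \left[(j+1)^{1-\gamma} - j^{1-\gamma}\right] \left(u^{\,n-j} - u^{\,n-j-1}\right). $$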
Abstract:
In late 1993 the Federal Government required the Industry Commission to inquire into charitable organisations. We have previously raised issues about the scope and nature of the inquiry process. These issues are: the appropriateness of the Commission to undertake the inquiry; the limited time span, given the breadth of the inquiry; and the non-explicit disclosure of the intellectual framework and methodology to be employed in the inquiry.
Abstract:
A vertex-centred finite volume method (FVM) for the Cahn-Hilliard (CH) and recently proposed Cahn-Hilliard-reaction (CHR) equations is presented. Information at control volume faces is computed using a high-order least-squares approach based on Taylor series approximations. This least-squares problem explicitly includes the variational boundary condition (VBC) that ensures that the discrete equations satisfy all of the boundary conditions. We use this approach to solve the CH and CHR equations in one and two dimensions and show that our scheme satisfies the VBC to at least second order. For the CH equation we show evidence of conservative, gradient-stable solutions; however, for the CHR equation, strict gradient stability is more challenging to achieve.
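A minimal first-order sketch of the Taylor-series least-squares idea for reconstructing face information from nearby nodal values; the paper uses a higher-order fit and incorporates the variational boundary condition, and the 2D setting and names below are assumptions for illustration only:

    import numpy as np

    def face_reconstruction(xf, nodes, values):
        """Least-squares fit of u(x) ~ a0 + a1*dx + a2*dy about a face centroid xf.

        nodes: (n, 2) coordinates of nearby vertices (n >= 3); values: (n,) nodal values.
        Returns the reconstructed face value a0 and gradient (a1, a2).
        """
        d = nodes - xf
        A = np.column_stack([np.ones(len(values)), d[:, 0], d[:, 1]])
        coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
        return coeffs[0], coeffs[1:]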
Abstract:
OBJECTIVE: Acute traumatic coagulopathy occurs early in hemorrhagic trauma and is a major contributor to mortality and morbidity. Our aim was to examine the effect of small-volume 7.5% NaCl adenocaine (adenosine and lidocaine) and Mg on hypotensive resuscitation and coagulopathy in the rat model of severe hemorrhagic shock. DESIGN: Prospective randomized laboratory investigation. SUBJECTS: A total of 68 male Sprague-Dawley rats. INTERVENTION: Post-hemorrhagic shock treatment for acute traumatic coagulopathy. MEASUREMENTS AND METHODS: Nonheparinized male Sprague-Dawley rats (300-450 g, n = 68) were randomly assigned to either: 1) untreated; 2) 7.5% NaCl; 3) 7.5% NaCl adenocaine; 4) 7.5% NaCl Mg; or 5) 7.5% NaCl adenocaine/Mg. Hemorrhagic shock was induced by phlebotomy to a mean arterial pressure of 35-40 mm Hg for 20 mins (~40% blood loss), and animals were left in shock for 60 mins. Bolus (0.3 mL) was injected into the femoral vein and hemodynamics monitored. Blood was collected in Na citrate (3.2%) tubes, centrifuged, and the plasma snap frozen in liquid N2 and stored at -80°C. Coagulation was assessed using activated partial thromboplastin times and prothrombin times. RESULTS: Small-volume 7.5% NaCl adenocaine and 7.5% NaCl adenocaine/Mg were the only two groups that gradually increased mean arterial pressure 1.6-fold from 38-39 mm Hg to 52 and 64 mm Hg, respectively, at 60 mins (p < .05). Baseline plasma activated partial thromboplastin time was 17 ± 0.5 secs and increased to 63 ± 21 secs after the bleed and 217 ± 32 secs after 60-min shock. At 60-min resuscitation, activated partial thromboplastin time values for untreated, 7.5% NaCl, 7.5% NaCl/Mg, and 7.5% NaCl adenocaine rats were 269 ± 31 secs, 262 ± 38 secs, 150 ± 43 secs, and 244 ± 38 secs, respectively. In contrast, activated partial thromboplastin time for 7.5% NaCl adenocaine/Mg was 24 ± 2 secs (p < .05). Baseline prothrombin time was 28 ± 0.8 secs (n = 8) and followed a similar pattern of correction. CONCLUSIONS: Plasma activated partial thromboplastin time and prothrombin time increased over 10-fold during the bleed and shock periods prior to resuscitation, and a small-volume (~1 mL/kg) IV bolus of 7.5% NaCl adenocaine/Mg was the only treatment that raised mean arterial pressure into the permissive range and returned activated partial thromboplastin time and prothrombin time clotting times to baseline at 60 mins.
Abstract:
This is volume 1 in a series of four volumes about the origins of Australian football as it evolved in Victoria between 1858 and 1896. This volume addresses its very beginnings as an amateur sport and the rise of the first clubs. Invented by a group of Melbourne cricketers and sports enthusiasts, Australian Rules football was developed through games played on Melbourne's park lands and was originally known as "Melbourne Football Club Rules". This formative period of the game saw the birth of the first 'amateur heroes' of the game. Players such as T.W. Wills, H.C.A. Harrison, Jack Conway, George O'Mullane and Robert Murray Smith emerged as warriors engaged in individual rugby-type scrimmages. The introduction of Challenge Cups was an important spur for this burgeoning sport. Intense competition and growing rivalries between clubs such as Melbourne, South Yarra, Royal Park, and Geelong began to flourish and the game developed as a result. By the 1870s the game "Victorian Rules" had become the most popular outdoor winter sport across the state. In subsequent decades, rapid growth in club football occurred and the game attracted increasing media attention.
Abstract:
Fast calculation of quantities such as in-cylinder volume and indicated power is important in internal combustion engine research. Multiple channels of data including crank angle and pressure were collected for this purpose using a fully instrumented diesel engine research facility. Currently, existing methods use software to post-process the data, first calculating volume from crank angle, then calculating the indicated work and indicated power from the area enclosed by the pressure-volume indicator diagram. Instead, this work investigates the feasibility of achieving real-time calculation of volume and power via hardware implementation on Field Programmable Gate Arrays (FPGAs). Alternative hardware implementations were investigated using lookup tables, Taylor series methods or the CORDIC (CoOrdinate Rotation DIgital Computer) algorithm to compute the trigonometric operations in the crank angle to volume calculation, and the CORDIC algorithm was found to use the least amount of resources. Simulation of the hardware-based implementation showed that the error in the volume and indicated power is less than 0.1%.
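A minimal floating-point reference for the two quantities being computed (the FPGA implementation evaluates the same relations in fixed-point, with CORDIC handling the trigonometric terms); the slider-crank geometry below is the standard textbook form and the parameter names are assumptions, not taken from the paper:

    import numpy as np

    def cylinder_volume(theta, bore, stroke, conrod, v_clearance):
        """In-cylinder volume from crank angle (slider-crank geometry).

        theta in radians; bore, stroke, conrod in metres; v_clearance in m^3.
        """
        a = stroke / 2.0                                     # crank radius
        x = a * np.cos(theta) + np.sqrt(conrod**2 - (a * np.sin(theta))**2)
        swept = (np.pi * bore**2 / 4.0) * (conrod + a - x)   # piston travel from TDC times bore area
        return v_clearance + swept

    def indicated_work(pressure, volume):
        """Indicated work per cycle: area enclosed by the p-V loop (trapezoidal rule)."""
        return np.trapz(pressure, volume)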
Abstract:
Purpose: To assess the effects of pre-cooling volume on neuromuscular function and performance in free-paced intermittent-sprint exercise in the heat. Methods: Ten male team-sport athletes completed four randomized trials involving an 85-min free-paced intermittent-sprint exercise protocol in 33°C and 33% relative humidity. Pre-cooling sessions included whole body (WB), head+hand (HH), head (H) and no cooling (CONT), applied for 20 min pre-exercise and 5 min mid-exercise. Maximal voluntary contractions (MVC) were assessed pre- and post-intervention and mid- and post-exercise. Exercise performance was assessed with sprint times, % decline and distances covered during free-paced bouts. Measures of core (Tc) and skin (Tsk) temperatures, heart rate, perceptual exertion and thermal stress were monitored throughout. Venous and capillary blood was analyzed for metabolite, muscle damage and inflammatory markers. Results: WB pre-cooling facilitated the maintenance of sprint times during the exercise protocol with reduced % decline (P=0.04). Mean and total hard running distances increased with pre-cooling by 12% compared to CONT (P<0.05); specifically, WB was 6-7% greater than HH (P=0.02) and H (P=0.001), respectively. No change was evident in mean voluntary or evoked force pre- to post-exercise with WB and HH cooling (P>0.05). WB and HH cooling reduced Tc by 0.1-0.3°C compared to other conditions (P<0.05). WB Tsk was suppressed for the entire session (P=0.001). HR responses following WB cooling were reduced (P=0.05; d=1.07) compared to CONT conditions during exercise. Conclusion: A relationship between pre-cooling volume and exercise performance seems apparent, as larger surface area coverage augmented subsequent free-paced exercise capacity, in conjunction with greater suppression of physiological load. Maintenance of MVC with pre-cooling, despite increased work output, suggests the role of centrally-mediated mechanisms in exercise pacing regulation and subsequent performance.