976 results for pacs: document processing techniques


Relevance: 30.00%

Abstract:

Therapeutic plasmapheresis allows the extracorporeal removal of plasma lipoproteins (lipid apheresis, LA). It can be non-selective (non-specific), semi-selective, or selective for low-density lipoprotein and lipoprotein(a) (specific LDL-Lp(a) apheresis; lipoprotein apheresis, LDLa). The LDL removal rate is an ideal parameter for assessing the efficiency of a system. Plasma exchange (PEX) can be considered neither specific nor selective. In PEX, whole blood is separated into plasma and its corpuscular components, usually by centrifugation or by filtration. The corpuscular components, mixed with a 20%-25% albumin solution plus saline (NaCl 0.9%), are then reinfused into the patient to replace the plasma removed. PEX eliminates atherogenic lipoproteins, but also other essential plasma proteins such as albumin, immunoglobulins, and coagulation mediators. Cascade filtration (CF) is a method based on plasma separation and removal of plasma proteins by double filtration. In CF, two hollow-fiber filters with pores of different diameters are used to eliminate plasma components of different molecular weights and diameters. A CF system uses a first polypropylene filter with 0.55 µm pores and a second cellulose diacetate filter with 0.02 µm pores. The first filter separates the whole blood; the plasma is then perfused through the second filter, which recovers molecules with a diameter below 0.02 µm and removes larger molecules such as apoB100-containing lipoproteins. Since albumin and immunoglobulins are not removed, or only to a negligible extent, plasma expanders and substitution fluids (in particular albumin, as required in PEX) are not needed. CF, however, is characterized by lower selectivity, since it also removes high-density lipoprotein (HDL) particles, which have antiatherogenic activity. In the 1980s, variations of lipid apheresis were developed that remove LDL-cholesterol (LDLC) (-61%) and Lp(a) (-60%) from plasma by processing 3 liters of filtered plasma by means of lipid-specific thermofiltration, LDL immunoadsorption, heparin-induced LDL precipitation, or LDL adsorption with dextran sulphate. More recently (1990s), the DALI® and Liposorber D® hemoperfusion systems, effective for removing apoB100-containing lipoproteins, were developed. All the above-mentioned systems are established LDL-apheresis techniques falling under the generic definition of LDLa. However, this definition does not adequately describe the removal of another highly atherogenic lipoprotein particle, Lp(a). It would therefore be better to refer to the above techniques with the broader scientific and technical concept of lipoprotein apheresis. Keywords: lipid apheresis, lipoprotein apheresis, LDL-apheresis, severe dyslipidemia.

Relevance: 30.00%

Abstract:

F-123-R; issued June 1, 1997; two different reports were issued by the Center for Aquatic Ecology under report number 1997 (9).

Relevance: 30.00%

Abstract:

The use of the term "Electronic Publishing" transcends any notions of the paperless office and of a purely electronic transfer and dissemination of information over networks. It now encompasses all computer-assisted methods for the production of documents and includes the imaging of a document on paper as one of the options to be provided by an integrated processing scheme. Electronic publishing draws heavily on techniques from computer science and information technology, but technical, legal, financial and organisational problems have to be overcome before it can replace traditional publication mechanisms. These problems are illustrated with reference to the publication arrangements for the journal "Electronic Publishing Origination, Dissemination and Design". The authors of this paper are the co-editors of this journal, which appears in traditional form and relies on a wide variety of support from electronic technologies in the pre-publication phase.

Relevance: 30.00%

Abstract:

In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based single-image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes ("weak" and "strong") according to their thermal gradient. A preliminary smoothing is applied prior to detection using different convolutions: three types of filters (median, average, and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24, and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency and preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from the largest to the smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction by a preliminary smoothing of the data considerably improves frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those obtained with 1 km data (using a standard median 3 × 3 convolution) in terms of detectability, length, and location. The method, applied to 4 km data, is easily applicable to large regions or at the global scale with far fewer constraints on data manipulation and processing time relative to 1 km data.
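Because the abstract describes a specific smooth-then-detect pipeline, a small illustration may help. The following is a minimal sketch in Python (numpy/scipy) of the preliminary smoothing combinations it mentions: it is not the Cayula-Cornillon SIED or the Nieto et al. CMW implementation; a simple gradient-magnitude threshold stands in for the detector, and the synthetic SST field, kernel-to-sigma conversion, and threshold value are illustrative assumptions.

```python
# Minimal sketch of the preliminary smoothing step, assuming a 2-D numpy array
# of sea surface temperature in degrees C with no missing values. A gradient
# threshold stands in for the histogram-based SIED/CMW detector so the effect
# of each smoothing combination can be compared; the 0.05 degC/pixel threshold
# and the synthetic field are illustrative assumptions, not values from the study.
import numpy as np
from scipy import ndimage

def smooth(sst, filter_type="median", kernel=5):
    """Apply one of the three preliminary smoothing filters tested in the study."""
    if filter_type == "median":
        return ndimage.median_filter(sst, size=kernel)
    if filter_type == "average":
        return ndimage.uniform_filter(sst, size=kernel)
    if filter_type == "gaussian":
        # Rough conversion of a kernel width to a Gaussian sigma.
        return ndimage.gaussian_filter(sst, sigma=kernel / 4.0)
    raise ValueError(f"unknown filter type: {filter_type}")

def front_mask(sst, grad_threshold=0.05):
    """Placeholder detector: flag pixels whose thermal gradient exceeds a threshold."""
    gy, gx = np.gradient(sst)
    return np.hypot(gx, gy) > grad_threshold

# Compare the smoothing combinations on a noisy synthetic SST field.
rng = np.random.default_rng(0)
sst = np.tile(np.linspace(18.0, 26.0, 256), (256, 1)) + rng.normal(0.0, 0.3, (256, 256))
for ftype in ("median", "average", "gaussian"):
    for kernel in (3, 5, 7, 9):
        n_front = front_mask(smooth(sst, ftype, kernel)).sum()
        print(f"{ftype:8s} {kernel}x{kernel}: {n_front} front pixels")
```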

Relevance: 30.00%

Abstract:

Division of Fisheries, Illinois Department of Natural Resources Grant/Contract No: Federal Aid Project F-123 R-15

Relevance: 30.00%

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a single huge graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and the graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them according to user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. A probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great deal of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
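As an illustration of the kind of top-k ranked subgraph matching described above, the sketch below uses Python with networkx. It loosely mimics the flavour of the first model (SIQ): substitutions are scored from a property of the matched vertices and the k best are returned. networkx's VF2 matcher performs induced subgraph isomorphism, a simplification of SPARQL-style homomorphic matching; the data graph, the 'influence' property, and the scoring function are hypothetical and are not taken from the thesis proposal.

```python
# Minimal sketch: edge-labeled subgraph matching with user-defined top-k ranking.
import networkx as nx
from networkx.algorithms import isomorphism as iso

# Edge-labeled data graph: people and "knows"/"worksWith" relationships (hypothetical).
G = nx.DiGraph()
G.add_node("alice", influence=0.9)
G.add_node("bob", influence=0.4)
G.add_node("carol", influence=0.7)
G.add_node("dave", influence=0.2)
G.add_edge("alice", "bob", label="knows")
G.add_edge("alice", "carol", label="worksWith")
G.add_edge("carol", "dave", label="knows")
G.add_edge("bob", "dave", label="worksWith")

# Query pattern: ?x --knows--> ?y (an edge-labeled triple pattern).
pattern = nx.DiGraph()
pattern.add_edge("?x", "?y", label="knows")

matcher = iso.DiGraphMatcher(
    G, pattern, edge_match=iso.categorical_edge_match("label", None)
)

def score(mapping):
    """User-defined importance: sum of the 'influence' of the matched vertices."""
    return sum(G.nodes[v]["influence"] for v in mapping)

# Enumerate substitutions (data-graph nodes mapped to query variables),
# score them, and keep the top-k.
answers = sorted(matcher.subgraph_isomorphisms_iter(), key=score, reverse=True)
k = 2
for mapping in answers[:k]:
    substitution = {var: node for node, var in mapping.items()}
    print(substitution, "score =", round(score(mapping), 2))
```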

Relevance: 30.00%

Abstract:

The main task is to analyze the state of the art of grating coupler production and of low-cost polymer substrates, and then to provide a recommendation of a new or adapted process for the production of metallic gratings on polymer sheets, based on a Failure Mode and Effects Analysis (FMEA). In order to achieve that, this thesis is divided into four chapters. After the introductory first chapter, the second chapter provides details about the state of the art in optical technology platforms, with a focus on polymers and their main features for the intended application, such as flexibility, low cost, and roll-to-roll compatibility. It then defines diffraction gratings and their specifications and closes with an explanation of the adhesion mechanisms of inorganic materials on polymer substrates. The third chapter discusses the processing of grating couplers. It introduces the basic fabrication methods and details a selection of current fabrication schemes found in the literature, with an assessment of their potential use for the desired application. The last chapter is an FMEA of the selected fabrication process, called Flip and Fuse, in order to check its capability to realize the grating structure.
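Since the abstract centers on an FMEA of the Flip and Fuse process, a brief sketch of how an FMEA ranks risks may be useful. The Python example below assumes the common risk-priority-number convention (RPN = severity × occurrence × detection, each rated 1-10); the failure modes and ratings are hypothetical and are not taken from the thesis.

```python
# Minimal FMEA sketch: rank hypothetical failure modes of a grating-transfer
# process by risk priority number (RPN = severity * occurrence * detection).
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    description: str
    severity: int    # impact if it occurs (1 = negligible, 10 = critical)
    occurrence: int  # how often it is expected (1 = rare, 10 = frequent)
    detection: int   # how hard it is to detect (1 = easy, 10 = very hard)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes, not taken from the thesis.
modes = [
    FailureMode("metal transfer", "incomplete grating transfer to the polymer", 8, 3, 4),
    FailureMode("fusing", "substrate deformation at fusing temperature", 7, 4, 2),
    FailureMode("adhesion", "delamination of the metallic grating", 9, 2, 5),
]

# Rank failure modes by RPN to prioritize corrective actions.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.step:15s} RPN={m.rpn:3d}  {m.description}")
```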