945 results for (2D)2PCA


Relevance:

10.00%

Publisher:

Abstract:

Prostate cancer (CaP) is the second leading cause of cancer-related deaths in North American males and the most common newly diagnosed cancer in men worldwide. Biomarkers are widely used both for early detection and as prognostic tests for cancer. The current, commonly used biomarker for CaP is serum prostate-specific antigen (PSA). However, the specificity of this biomarker is low, as its serum level is increased not only in CaP but also in various other diseases, with age and even with body mass index. Human body fluids provide an excellent resource for biomarker discovery, with the advantage over tissue/biopsy samples of ease of access, owing to the less invasive nature of collection. However, their analysis presents challenges in terms of variability and validation. Blood and urine are two human body fluids commonly used for CaP research, but their proteomic analyses are limited both by the large dynamic range of protein abundance, which makes detection of low-abundance proteins difficult, and, in the case of urine, by the high salt concentration. To overcome these challenges, different techniques for removal of high-abundance proteins and enrichment of low-abundance proteins are used; their applications and limitations are discussed in this review. A number of innovative proteomic techniques have improved detection of biomarkers. They include two-dimensional differential gel electrophoresis (2D-DIGE), quantitative mass spectrometry (MS) and functional proteomic studies, i.e., investigating the association of post-translational modifications (PTMs) such as phosphorylation, glycosylation and protein degradation. The recent development of quantitative MS techniques such as stable isotope labeling with amino acids in cell culture (SILAC), isobaric tags for relative and absolute quantitation (iTRAQ) and multiple reaction monitoring (MRM) has allowed proteomic researchers to quantitatively compare data from different samples. 2D-DIGE has greatly improved the statistical power of classical 2D gel analysis by introducing an internal control. This chapter aims to review novel CaP biomarkers and to discuss current trends in biomarker research from two angles: the source of biomarkers (particularly human body fluids such as blood and urine) and emerging proteomic approaches for biomarker research.

Typical flow fields in a stormwater gross pollutant trap (GPT) with blocked retaining screens were experimentally captured and visualised. Particle image velocimetry (PIV) software was used to capture the flow field data by tracking neutrally buoyant particles with a high-speed camera. A technique was developed to apply the Image Based Flow Visualization (IBFV) algorithm to the experimental raw dataset generated by the PIV software. The dataset consisted of scattered 2D point velocity vectors, and the IBFV visualisation facilitated flow feature characterisation within the GPT. These flow features played a pivotal role in understanding gross pollutant capture and retention within the GPT. The IBFV animations revealed otherwise unnoticed flow features and experimental artefacts; for example, a circular tracer marker in the IBFV program visually highlighted streamlines, allowing specific areas to be investigated and flow features within the GPT to be identified.
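
Texture-based methods such as IBFV operate on a dense, regularly sampled vector field, so scattered PIV vectors must first be resampled onto a grid. The following is a minimal sketch of that preprocessing step; the function name and grid resolution are illustrative, not taken from the study:

```python
import numpy as np
from scipy.interpolate import griddata

def scattered_to_grid(points, u, v, nx=50, ny=50):
    """Resample scattered PIV velocity vectors onto a regular grid.

    `points` is an (N, 2) array of x, y positions; `u`, `v` are the
    velocity components measured at those positions. Texture-based
    visualisation methods such as IBFV expect a dense field, so the
    scattered tracker output is interpolated onto an nx-by-ny grid.
    """
    xs = np.linspace(points[:, 0].min(), points[:, 0].max(), nx)
    ys = np.linspace(points[:, 1].min(), points[:, 1].max(), ny)
    gx, gy = np.meshgrid(xs, ys)
    # Linear (barycentric) interpolation; cells outside the convex
    # hull of the measurement points come back as NaN.
    gu = griddata(points, u, (gx, gy), method="linear")
    gv = griddata(points, v, (gx, gy), method="linear")
    return gx, gy, gu, gv
```

Note that linear interpolation leaves NaN outside the convex hull of the tracked particles, which itself flags regions of the GPT with no tracer coverage.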

Hematopoietic stem cell (HSC) transplantation is a well-established curative therapy for some hematological malignancies. However, achieving an adequate supply of HSCs from some donor tissues can limit both its application and its ultimate efficacy. The theory that this limitation could be overcome by expanding the HSC population before transplantation has motivated numerous laboratories to develop ex vivo expansion processes. Pioneering work in this field utilized stromal cells as support cells in cocultures with HSCs to mimic the HSC niche. We hypothesized that by translating this classic coculture system to a three-dimensional (3D) structure we could better replicate the niche environment and in turn enhance HSC expansion. Herein we describe a novel high-throughput 3D coculture system in which murine-derived HSCs can be cocultured with mesenchymal stem/stromal cells (MSCs) in 3D microaggregates, which we term “micromarrows.” Micromarrows were formed using surface-modified microwells, and their ability to support HSC expansion was compared to classic two-dimensional (2D) cocultures. While both the 2D and 3D systems provided only modest total cell expansion in the minimally supplemented medium, the micromarrow system supported the expansion of approximately twice as many HSC candidates as the 2D controls. Histology revealed that at day 7 the majority of bound hematopoietic cells resided in the outer layers of the aggregate. Quantitative polymerase chain reaction demonstrated that MSCs maintained in 3D aggregates express significantly higher levels of key hematopoietic niche factors relative to their 2D equivalents. Thus, we propose that the micromarrow platform represents a promising first step toward a high-throughput HSC 3D coculture system that may enable in vitro HSC niche recapitulation and subsequent extensive in vitro HSC self-renewal.

The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that can process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events that do not fit the learned 'normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people; outliers of the model with insufficient likelihood are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the HMM depends not only on the previous state in the temporal direction, but also on the previous states at adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information, respectively. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated on the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
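
The underlying novelty-detection recipe (score each observation sequence under a model trained on normal data, and flag low-likelihood outliers) can be illustrated with a plain discrete HMM and the scaled forward algorithm. This is a simplified 1D stand-in for the paper's Semi-2D model, and all names and matrices below are illustrative:

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM.

    pi: (S,) initial state distribution
    A:  (S, S) transition matrix, A[i, j] = P(state j | state i)
    B:  (S, O) emission matrix,  B[s, o] = P(symbol o | state s)
    """
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then emit
        s = alpha.sum()
        ll += np.log(s)
        alpha = alpha / s
    return ll

def is_abnormal(obs, pi, A, B, threshold):
    """Flag a sequence whose per-frame log-likelihood under the
    'normal' model falls below a chosen threshold."""
    return log_likelihood(obs, pi, A, B) / len(obs) < threshold
```

In the paper's setting, the likelihood would come from the Semi-2D HMM's joint temporal and spatial factorisation rather than this purely temporal chain, but the thresholding step is the same.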

Flood-related scientific and community-based data are rarely systematically collected and analysed in the Philippines. Over recent decades the Pagsangaan River Basin, Leyte, has experienced several flood events, yet documentation describing flood characteristics such as extent, duration or height is close to non-existent. To address this issue, computerized flood modelling was used to reproduce past events for which data were available for at least partial calibration and validation. The model was also used to provide scenario-based predictions based on A1B climate change assumptions for the area. The most important input for flood modelling is a Digital Elevation Model (DEM) of the river basin. No accurate topographic maps or Light Detection And Ranging (LIDAR)-generated data are available for the Pagsangaan River. Therefore, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Map (GDEM), Version 1, was chosen as the DEM. Although its horizontal spatial resolution of 30 m is desirable, it contains substantial vertical errors. These were identified, different correction methods were tested, and the resulting DEM was used for flood modelling. The above-mentioned data were combined with cross-sections at various strategic locations of the river network, meteorological records, river water levels and current velocities to develop the 1D-2D flood model. SOBEK was used as the modelling software to create different rainfall scenarios, including historic flooding events. Owing to the lack of scientific data for verifying model quality, interviews with local stakeholders served as the gauge for judging the quality of the generated flood maps. According to interviewees, the model reflects reality more accurately than previously available flood maps. The resulting flood maps are now used by the operations centre of a local flood early warning system for warnings and evacuation alerts. Furthermore, these maps can serve as a basis for identifying flood hazard areas for spatial land use planning purposes.
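
One simple family of vertical-error corrections for a DEM is outlier (spike and pit) removal against a local neighbourhood statistic. The sketch below uses a median filter; the window size and tolerance are illustrative assumptions, not the correction method actually applied in the study:

```python
import numpy as np
from scipy.ndimage import median_filter

def despike_dem(dem, size=3, max_dev=10.0):
    """Replace spike/pit cells in a DEM with their neighbourhood median.

    Cells deviating from the local median by more than `max_dev`
    (elevation units, here metres) are treated as artefacts. Both
    `size` and `max_dev` are illustrative parameters.
    """
    med = median_filter(dem, size=size)
    spikes = np.abs(dem - med) > max_dev
    out = dem.copy()
    out[spikes] = med[spikes]          # only flagged cells are altered
    return out
```

A correction like this only handles isolated artefacts; systematic ASTER GDEM biases (e.g. offsets over vegetation) need reference elevations to remove, which is why several correction methods were compared in the study.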

In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK the computational cost is invariant to the number of filters and as a result far more efficient, and (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm such as those found in Active Appearance Models (AAMs).
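
The computational advantages in (i)-(iii) rest on a single identity: by Parseval's theorem, filtering with a bank of filters and summing squared errors in the spatial domain equals one weighted least-squares problem in the Fourier domain, with the whole bank collapsing into a sparse diagonal weighting matrix. A sketch in assumed notation (T the template, I(p) the warped source, g_i the filters, hats denoting 2D Fourier transforms):

```latex
\min_{\Delta\mathbf{p}} \sum_i
  \bigl\| g_i * \bigl(T - I(\mathbf{p} + \Delta\mathbf{p})\bigr) \bigr\|^2
= \min_{\Delta\mathbf{p}}
  \bigl(\hat{T} - \hat{I}(\mathbf{p} + \Delta\mathbf{p})\bigr)^{\mathsf{H}}
  \, S \,
  \bigl(\hat{T} - \hat{I}(\mathbf{p} + \Delta\mathbf{p})\bigr),
\qquad
S = \operatorname{diag}\Bigl(\sum_i |\hat{g}_i|^2\Bigr)
```

Because the filter bank enters only through the diagonal matrix S, which can be precomputed once, the per-iteration cost of the right-hand side does not depend on the number of filters, which is point (iii) above.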

The design and construction community has shown increasing interest in adopting building information models (BIMs). The richness of information provided by BIMs has the potential to streamline the design and construction processes by enabling enhanced communication, coordination, automation and analysis. However, there are many challenges in extracting construction-specific information out of BIMs. In most cases, construction practitioners have to manually identify the required information, which is inefficient and prone to error, particularly for complex, large-scale projects. This paper describes the process and methods we have formalized to partially automate the extraction and querying of construction-specific information from a BIM. We describe methods for analyzing a BIM to query for spatial information that is relevant for construction practitioners, and that is typically represented implicitly in a BIM. Our approach integrates ifcXML data and other spatial data to develop a richer model for construction users. We employ custom 2D topological XQuery predicates to answer a variety of spatial queries. The validation results demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
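
The flavour of a 2D topological predicate is easy to show. The paper implements such predicates as custom XQuery functions over ifcXML; the sketch below is an illustrative Python equivalent on axis-aligned bounding boxes, with the function names and `(xmin, ymin, xmax, ymax)` tuple layout being assumptions:

```python
def bbox_intersects(a, b):
    """2D topological 'intersects' test on axis-aligned bounding boxes.

    Boxes are (xmin, ymin, xmax, ymax) tuples. Two boxes intersect
    exactly when their projections overlap on both axes.
    """
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def bbox_contains(outer, inner):
    """True if `outer` fully contains `inner`, the shape of a query
    such as 'which components lie inside this work zone?'."""
    ox0, oy0, ox1, oy1 = outer
    ix0, iy0, ix1, iy1 = inner
    return ox0 <= ix0 and oy0 <= iy0 and ix1 <= ox1 and iy1 <= oy1
```

In the XQuery setting the same tests run over coordinates extracted from ifcXML placements, so construction users can pose spatial questions that the BIM only answers implicitly.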

In the modern connected world, pervasive computing has become a reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problems of multi-device key management and single sign-on architectures. One solution is the use of secure side channels for authentication, including the visual channel as a vicinity proof. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating man-in-the-middle attacks. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices aimed specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising security. We show how our approach fits into existing authorization delegation and one-time password generation schemes, and that it is resilient to man-in-the-middle attacks.
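
To make the one-time password element concrete, here is a standard HOTP-style token (RFC 4226 truncation) of the kind such a barcode could carry. The `barcode_payload` format is hypothetical and is not the actual QR-Auth protocol:

```python
import base64
import hashlib
import hmac
import struct
import time

def one_time_token(shared_key: bytes, counter: int) -> str:
    """HMAC-based one-time token using RFC 4226 dynamic truncation."""
    msg = struct.pack(">Q", counter)                     # 8-byte counter
    mac = hmac.new(shared_key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**6:06d}"                         # 6-digit token

def barcode_payload(shared_key: bytes, device_id: str) -> str:
    """Hypothetical 2D-barcode payload: device id plus a token tied to
    a 30-second time window (TOTP-style counter)."""
    counter = int(time.time()) // 30
    token = one_time_token(shared_key, counter)
    return base64.urlsafe_b64encode(f"{device_id}:{token}".encode()).decode()
```

A visual channel carrying such a payload proves vicinity (the scanning device must see the screen) but not confidentiality, which is precisely the gap in prior schemes that QR-Auth's two-step design addresses.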

Graphene, one of the allotropes of carbon (alongside diamond, carbon nanotubes and fullerenes), is a monolayer of carbon atoms arranged in a honeycomb lattice, discovered in 2004. The Nobel Prize in Physics 2010 was awarded to Andre Geim and Konstantin Novoselov for their ground-breaking work on two-dimensional (2D) graphene [1]. Since its discovery, research communities have shown great interest in this novel material owing to its intriguing electrical, mechanical and thermal properties. It has been confirmed that graphene possesses very peculiar electrical properties, such as the anomalous quantum Hall effect and high electron mobility at room temperature (250,000 cm2/V·s). Graphene also has exceptional mechanical properties: it is one of the stiffest (modulus ~1 TPa) and strongest (strength ~100 GPa) materials. In addition, it has exceptional thermal conductivity (5000 W m−1 K−1). Owing to these exceptional properties, graphene has demonstrated its potential for broad applications in micro- and nano-devices, various sensors, electrodes, solar cells, energy storage devices and nanocomposites. In particular, the excellent mechanical properties of graphene make it attractive for the development of next-generation nanocomposites and hybrid materials...

This paper considers the problem of reconstructing the motion of a 3D articulated tree from 2D point correspondences subject to some temporal prior. Hitherto, smooth motion has been encouraged using a trajectory basis, yielding a hard combinatorial problem with time complexity growing exponentially in the number of frames. Branch and bound strategies have previously attempted to curb this complexity whilst maintaining global optimality. However, they provide no guarantee of being more efficient than exhaustive search. Inspired by recent work which reconstructs general trajectories using compact high-pass filters, we develop a dynamic programming approach which scales linearly in the number of frames, leveraging the intrinsically local nature of filter interactions. Extension to affine projection enables reconstruction without estimating cameras.
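
The linear-in-frames claim is the familiar dynamic-programming pattern: with K candidate reconstructions per frame and a pairwise temporal cost, the optimum over all K^F paths is found in O(F·K²) rather than exponential time. A generic sketch (illustrative only; the paper's costs come from its high-pass filter formulation, not from the toy costs below):

```python
def dp_smooth_path(costs, trans):
    """Viterbi-style DP: pick one candidate per frame minimising
    unary cost plus pairwise transition cost between consecutive
    frames. Runs in O(F * K^2), i.e. linear in the number of frames F.

    costs: list of F lists, costs[f][k] = unary cost of candidate k
    trans(a, b): transition (smoothness) cost between candidates
    """
    F, K = len(costs), len(costs[0])
    best = list(costs[0])            # best[k] = cheapest path ending in k
    back = []
    for f in range(1, F):
        prev_best, best, back_f = best, [], []
        for k in range(K):
            # cheapest predecessor for candidate k in frame f
            j = min(range(K), key=lambda j: prev_best[j] + trans(j, k))
            best.append(prev_best[j] + trans(j, k) + costs[f][k])
            back_f.append(j)
        back.append(back_f)
    # backtrack the optimal path from the cheapest final candidate
    k = min(range(K), key=lambda k: best[k])
    path = [k]
    for back_f in reversed(back):
        k = back_f[k]
        path.append(k)
    return list(reversed(path))
```

The locality that enables this is the key point of the paper: compact high-pass filters couple only nearby frames, so the temporal prior decomposes into exactly the pairwise (or short-range) terms a DP can exploit.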

In this paper we construct earthwork allocation plans for a linear infrastructure road project. Fuel consumption metrics and an innovative block partitioning and modelling approach are applied to reduce costs. 2D and 3D variants of the problem were compared to determine what effect, if any, dimensionality has on solution quality, and 3D variants were also examined for the additional complexities and difficulties they introduce. The numerical investigation shows a significant improvement and a reduction in fuel consumption, as theorised. The proposed solutions differ considerably from plans constructed for a distance-based metric, as commonly used in other approaches. Under certain conditions, 3D problem instances can be solved optimally as 2D problems.
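
The core allocation subproblem can be sketched as a classic transportation LP: ship cut volumes to fill locations at minimum total haul cost, with the cost matrix carrying a fuel-consumption metric rather than plain distance. A minimal formulation using `scipy.optimize.linprog` (illustrative only; the paper's block-partitioned model is considerably richer):

```python
import numpy as np
from scipy.optimize import linprog

def allocate_earthworks(supply, demand, cost):
    """Solve a balanced cut-to-fill allocation as a transportation LP.

    supply[i]:  volume available at cut block i
    demand[j]:  volume required at fill block j
    cost[i][j]: haulage cost per unit volume from i to j (e.g. a
                fuel-consumption metric instead of plain distance)
    Returns the (m, n) shipment plan and its total cost.
    """
    m, n = len(supply), len(demand)
    c = np.asarray(cost, dtype=float).ravel()   # variables x_ij, row-major
    A_eq, b_eq = [], list(supply) + list(demand)
    for i in range(m):                          # each cut fully used
        row = np.zeros(m * n)
        row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row)
    for j in range(n):                          # each fill fully served
        row = np.zeros(m * n)
        row[j::n] = 1.0
        A_eq.append(row)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(m, n), res.fun
```

Swapping the entries of `cost` from haul distance to a per-trip fuel estimate is what moves the optimum toward the fuel-efficient plans reported in the paper.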

Objective: Ureaplasma parvum colonization in the setting of polymicrobial flora is common in women with chorioamnionitis and is a risk factor for preterm delivery and neonatal morbidity. We hypothesized that ureaplasma colonization of amniotic fluid would modulate chorioamnionitis induced by E. coli lipopolysaccharide (LPS). Methods: Sheep received intra-amniotic (IA) injections of media (control) or live ureaplasma either 7 or 70 days before delivery. Another group received IA LPS 2 days before delivery. To test for interactions, U. parvum-exposed animals were challenged with IA LPS and delivered 2 days later. All animals were delivered preterm at 125 ± 1 days' gestation. Results: Both IA ureaplasma and LPS induced leukocyte infiltration of the chorioamnion. LPS greatly increased the expression of pro-inflammatory cytokines and myeloperoxidase in leukocytes, while ureaplasma alone caused modest responses. Interestingly, 7-day but not 70-day ureaplasma exposure significantly downregulated LPS-induced pro-inflammatory cytokine and myeloperoxidase expression in the chorioamnion. Conclusion: U. parvum can suppress LPS-induced experimental chorioamnionitis.

Management of groundwater systems requires realistic conceptual hydrogeological models, both as a framework for numerical simulation modelling and for system understanding and communicating that understanding to stakeholders and the broader community. To help meet these challenges we developed GVS (Groundwater Visualisation System), a stand-alone desktop software package that uses interactive 3D visualisation and animation techniques. The goal was a user-friendly groundwater management tool that could support a range of existing real-world and pre-processed data, both surface and subsurface, including geology and various types of temporal hydrological information. GVS allows these data to be integrated into a single conceptual hydrogeological model. In addition, 3D geological models produced externally with other software packages can readily be imported into GVS models, as can simulation outputs (e.g. piezometric surfaces) produced by software such as MODFLOW or FEFLOW. Boreholes can be integrated, showing any down-hole data and properties, including screen information, intersected geology, water level data and water chemistry. Animation is used to display spatial and temporal changes, with time-series data such as rainfall, standing water levels and electrical conductivity conveying dynamic processes. Time and space variations can be presented using a range of contouring and colour mapping techniques, in addition to interactive plots of time-series parameters. Other types of data, for example demographics and cultural information, can also be readily incorporated. The GVS software runs on a standard Windows or Linux-based PC with a minimum of 2 GB RAM, and the model output is easy and inexpensive to distribute, by download or via USB/DVD/CD.
Example models are described here for three groundwater systems in Queensland, northeastern Australia: two unconfined alluvial groundwater systems with intensive irrigation, the Lockyer Valley and the upper Condamine Valley, and the Surat Basin, a large sedimentary basin of confined artesian aquifers. This latter example required more detail in the hydrostratigraphy, correlation of formations with drillholes and visualisation of simulated piezometric surfaces. Both alluvial-system GVS models were developed during drought conditions to support government strategies to implement groundwater management. The Surat Basin model was industry-sponsored research for coal seam gas groundwater management and community information and consultation. The “virtual” groundwater systems in these 3D GVS models can be interactively interrogated via standard functions, plus production of 2D cross-sections, data selection from the 3D scene, and back-end database and plot displays. A unique feature is that GVS allows investigation of time-series data across different display modes, both 2D and 3D. GVS has been used successfully as a tool to enhance community/stakeholder understanding and knowledge of groundwater systems and is of value for training and educational purposes. Completed projects confirm that GVS provides powerful support for management and decision making, and serves as a tool for interpreting groundwater system hydrological processes. A highly effective visualisation output is the production of short videos (e.g. 2–5 min) based on sequences of camera ‘fly-throughs’ and screen images. Further work involves developing support for multi-screen displays and touch-screen technologies, distributed rendering and gestural interaction systems. To highlight the visualisation and animation capability of the GVS software, links to related multimedia hosted online are included in the references.

Molecular modelling has become a useful and widely applied tool for investigating the separation and diffusion behavior of gas molecules through nano-porous low-dimensional carbon materials, including quasi-1D carbon nanotubes and 2D graphene-like carbon allotropes. These simulations provide detailed, molecular-level information about the carbon framework structure as well as dynamic and mechanistic insights, i.e. size sieving, quantum sieving and chemical affinity sieving. In this perspective, we revisit recent advances in this field and summarize separation mechanisms for multicomponent systems from kinetic and equilibrium molecular simulations, also elucidating anomalous diffusion effects induced by the confining pore structure and outlining future directions for the field.

Background: Measurement accuracy is critical for biomechanical gait assessment, yet very few studies have determined the accuracy of common clinical rearfoot variables between cameras with different collection frequencies. Research question: What is the measurement error for common rearfoot gait parameters when using a standard 30 Hz digital camera compared to a 100 Hz camera? Type of study: Descriptive. Methods: 100 footfalls were recorded from 10 subjects (10 footfalls per subject) running on a treadmill at 2.68 m/s. A high-speed digital timer, accurate to within 1 ms, served as an external reference. Markers were placed along the vertical axis of the heel counter and the long axis of the shank, and 2D coordinates for the four markers were determined from heel strike to heel lift. Variables of interest included time of heel strike (THS), time of heel lift (THL), time to maximum eversion (TMax) and maximum rearfoot eversion angle (EvMax). Results: The THS difference was 29.77 ms (± 8.77), the THL difference was 35.64 ms (± 6.85), and the TMax difference was 16.50 ms (± 2.54). These temporal values represent differences equal to 11.9%, 14.3% and 6.6% of the stance phase of running gait, respectively. The EvMax difference was 1.02 degrees (± 0.46). Conclusions: A 30 Hz camera is accurate, compared to a high-frequency camera, for determining TMax and EvMax during clinical gait analysis. However, relatively large differences, in excess of 12% of the stance phase of gait, were measured for the THS and THL variables.
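
The percentage figures follow directly from the millisecond differences once a mean stance time is fixed; all three reported values are consistent with a stance duration of roughly 250 ms, which is an inferred assumption in the sketch below rather than a number stated in the abstract:

```python
def pct_of_stance(diff_ms, stance_ms=250.0):
    """Express a timing difference as a percentage of stance time.

    stance_ms defaults to ~250 ms, the mean stance duration implied
    by the paper's reported percentages (an assumption, not a value
    given in the abstract). Rounded to one decimal place to match
    the reporting style.
    """
    return round(100.0 * diff_ms / stance_ms, 1)
```

For example, the 29.77 ms THS difference divided by a 250 ms stance gives 11.9%, matching the reported value; the 35.64 ms and 16.50 ms differences likewise reproduce 14.3% and 6.6%.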