212 results for 2D nanopatterning

Relevance: 10.00%

Publisher:

Abstract:

A number of groups around the world are working in the field of three-dimensional (3D) ultrasound (US) in order to obtain higher-quality diagnostic information. 3D US, in general, involves collecting a sequence of conventional 2D US images along with information on the position and orientation of each image plane. A transformation matrix is calculated relating image space to real-world space. This allows image pixels and region of interest (ROI) points drawn on the image to be displayed in 3D. The 3D data can be used for the production of volume- or surface-rendered images, or for the direct calculation of ROI volumes.
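The mapping from image space to world space described above can be sketched as a homogeneous 4x4 transform applied to a pixel on the image plane. The function names, scan spacings, and example matrix below are illustrative assumptions, not taken from any specific 3D US system:

```python
# Hypothetical sketch: mapping a 2D ultrasound image pixel into 3D world
# space with a homogeneous 4x4 transformation matrix.

def mat_vec(T, p):
    """Multiply a 4x4 matrix T by a homogeneous point p = [x, y, z, 1]."""
    return [sum(T[r][c] * p[c] for c in range(4)) for r in range(4)]

def pixel_to_world(T, u, v, sx, sy):
    """Scale pixel (u, v) by spacings (sx, sy) in mm/pixel; the image
    plane is z = 0 in image space, so the homogeneous point is
    (u*sx, v*sy, 0, 1) before the transform."""
    x, y, z, w = mat_vec(T, [u * sx, v * sy, 0.0, 1.0])
    return (x / w, y / w, z / w)

# Example: a pose that translates the image plane 10 mm along world z.
T = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 10],
     [0, 0, 0, 1]]
print(pixel_to_world(T, 100, 50, 0.2, 0.2))  # (20.0, 10.0, 10.0)
```

In a real system T would come from the position sensor attached to the probe, one matrix per acquired image plane.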


The representation of business process models has been a continuing research topic for many years now. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in human-computer interaction, virtual reality, games and interactive entertainment has great potential in areas of BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. This initial visualization workshop seeks to initiate the development of a high-quality international forum to present and discuss research in this field. Via this workshop, we intend to create a community to unify and nurture the development of process visualization topics as a continuing research area.


The most common software analysis tools available for measuring fluorescence images handle two-dimensional (2D) data and rely on manual settings for inclusion and exclusion of data points, and on computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology(1), even in complex tissue sections(2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures.
Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (tree-like structures). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells(3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell, a scientific project with the Eidgenössische Technische Hochschule developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous, because it ideally builds the cell surface without void spaces. To our knowledge, no user-modifiable automated approach has yet been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information of an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.). These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but not familiarity with computer applications, to perform quantification of morphological changes in cell dynamics.
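A minimal building block of such automated 3D quantification is connected-component analysis of a thresholded volume. The sketch below is a generic stand-in, not the Imaris XT/MATLAB pipeline described above: it counts and sizes 3D objects in a binary fluorescence volume using 6-connectivity flood fill.

```python
# Illustrative sketch: size distribution of 3D connected components in a
# thresholded fluorescence volume (6-connectivity), in pure Python.
from collections import deque

def label_components(vol):
    """vol: 3D nested list of 0/1 voxels, indexed [z][y][x].
    Returns a list of component sizes (voxel counts)."""
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    seen = set()
    sizes = []
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                if vol[z][y][x] and (z, y, x) not in seen:
                    # breadth-first flood fill from an unvisited voxel
                    q = deque([(z, y, x)])
                    seen.add((z, y, x))
                    n = 0
                    while q:
                        cz, cy, cx = q.popleft()
                        n += 1
                        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                                           (0,-1,0), (0,0,1), (0,0,-1)):
                            nz, ny, nx = cz + dz, cy + dy, cx + dx
                            if (0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X
                                    and vol[nz][ny][nx]
                                    and (nz, ny, nx) not in seen):
                                seen.add((nz, ny, nx))
                                q.append((nz, ny, nx))
                    sizes.append(n)
    return sizes
```

On real confocal stacks the same idea is applied after intensity thresholding, with component sizes converted to volumes via the voxel spacing.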


Biophysical and biochemical properties of the microenvironment regulate cellular responses such as growth, differentiation, morphogenesis and migration in normal and cancer cells. Since two-dimensional (2D) cultures lack the essential characteristics of the native cellular microenvironment, three-dimensional (3D) cultures have been developed to better mimic the natural extracellular matrix. To date, 3D culture systems have relied mostly on collagen and Matrigel™ hydrogels, allowing only limited control over matrix stiffness, proteolytic degradability, and ligand density. In contrast, bioengineered hydrogels allow us to independently tune and systematically investigate the influence of these parameters on cell growth and differentiation. In this study, polyethylene glycol (PEG) hydrogels, functionalized with arginine-glycine-aspartic acid (RGD) motifs, common cell-binding motifs in extracellular matrix proteins, and matrix metalloproteinase (MMP) cleavage sites, were characterized regarding their stiffness, diffusive properties, and ability to support growth of androgen-dependent LNCaP prostate cancer cells. We found that the mechanical properties modulated the growth kinetics of LNCaP cells in the PEG hydrogel. Over culture periods of 28 days, LNCaP cells underwent morphogenic changes, forming tumor-like structures in 3D culture with hypoxic and apoptotic cores. We further compared protein and gene expression levels between 3D and 2D cultures upon stimulation with the synthetic androgen R1881. Interestingly, the kinetics of R1881-stimulated androgen receptor (AR) nuclear translocation differed between 2D and 3D cultures when observed by immunofluorescent staining. Furthermore, microarray studies revealed that changes in expression levels of androgen-responsive genes upon R1881 treatment differed greatly between 2D and 3D cultures.
Taken together, culturing LNCaP cells in the tunable PEG hydrogels reveals differences in the cellular responses to androgen stimulation between the 2D and 3D environments. We therefore suggest that the presented 3D culture system represents a powerful tool for high-throughput prostate cancer drug testing that recapitulates the tumor microenvironment. © 2012 Sieh et al.


Background: Tenofovir has been associated with renal phosphate wasting, reduced bone mineral density, and higher parathyroid hormone levels. The aim of this study was to carry out a detailed comparison of the effects of tenofovir versus non-tenofovir use on calcium, phosphate, vitamin D, parathyroid hormone (PTH), and bone mineral density. Methods: A cohort study of 56 HIV-1-infected adults at a single centre in the UK on stable antiretroviral regimens, comparing biochemical and bone mineral density parameters between patients receiving either tenofovir or another nucleoside reverse transcriptase inhibitor. Principal Findings: In the unadjusted analysis, there was no significant difference between the two groups in PTH levels (tenofovir mean 5.9 pmol/L, 95% confidence intervals 5.0 to 6.8, versus non-tenofovir 5.9, 4.9 to 6.9; p = 0.98). Patients on tenofovir had significantly reduced urinary calcium excretion (median 3.01 mmol/24 hours) compared to non-tenofovir users (4.56; p < 0.0001). Stratification of the analysis by age and ethnicity revealed that non-white men, but not women, on tenofovir had higher PTH levels than non-white men not on tenofovir (mean difference 3.1 pmol/L, 95% CI 0.9 to 5.3; p = 0.007). Those patients with optimal 25-hydroxyvitamin D (>75 nmol/L) on tenofovir had higher 1,25-dihydroxyvitamin D [1,25(OH)2D] (median 48 pg/mL versus 31; p = 0.012) and fractional excretion of phosphate (median 26.1% versus 14.6%; p = 0.025), and lower serum phosphate (median 0.79 mmol/L versus 1.02; p = 0.040), than those not taking tenofovir. Conclusions: The effects of tenofovir on PTH levels were modified by sex and ethnicity in this cohort. Vitamin D status also modified the effects of tenofovir on serum concentrations of 1,25(OH)2D and phosphate.


Approximate clone detection is the process of identifying similar process fragments in business process model collections. The tool presented in this paper can efficiently cluster approximate clones in large process model repositories. Once a repository is clustered, users can filter and browse the clusters using different filtering parameters. Our tool can also visualize clusters in the 2D space, allowing a better understanding of clusters and their member fragments. This demonstration will be useful for researchers and practitioners working on large process model repositories, where process standardization is a critical task for increasing the consistency and reducing the complexity of the repository.
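The clustering step described above can be illustrated with a simple greedy scheme over a pairwise fragment-distance function. This is a generic sketch under stated assumptions, not the tool's actual algorithm: fragments and the distance metric are placeholders, and each cluster is represented by its first member.

```python
# Illustrative sketch: greedy clustering of process fragments whose
# distance to a cluster representative falls below a threshold.

def cluster_fragments(fragments, dist, threshold):
    """fragments: any hashable items; dist(a, b): pairwise distance;
    a fragment joins the first cluster whose representative (the first
    member) is within `threshold`, otherwise it starts a new cluster."""
    clusters = []
    for f in fragments:
        for c in clusters:
            if dist(f, c[0]) <= threshold:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters

# Toy usage with integers standing in for fragments and absolute
# difference standing in for an edit-distance-style metric:
groups = cluster_fragments([1, 2, 10, 11], lambda a, b: abs(a - b), 3)
print(groups)  # [[1, 2], [10, 11]]
```

In a repository setting, `dist` would be an approximate-clone similarity measure (e.g. a graph edit distance between fragments), and the resulting clusters are what the user filters and browses.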


Prostate cancer (CaP) is the second leading cause of cancer-related deaths in North American males and the most common newly diagnosed cancer in men worldwide. Biomarkers are widely used both for early detection and for prognostic tests for cancer. The current, commonly used biomarker for CaP is serum prostate-specific antigen (PSA). However, the specificity of this biomarker is low, as its serum level is increased not only in CaP but also in various other diseases, with age, and even with body mass index. Human body fluids provide an excellent resource for the discovery of biomarkers, with the advantage over tissue/biopsy samples of their ease of access, due to the less invasive nature of collection. However, their analysis presents challenges in terms of variability and validation. Blood and urine are two human body fluids commonly used for CaP research, but their proteomic analyses are limited by the large dynamic range of protein abundance, which makes detection of low-abundance proteins difficult, and, in the case of urine, by the high salt concentration. To overcome these challenges, different techniques for removal of high-abundance proteins and enrichment of low-abundance proteins are used. Their applications and limitations are discussed in this review. A number of innovative proteomic techniques have improved detection of biomarkers. They include two-dimensional differential gel electrophoresis (2D-DIGE), quantitative mass spectrometry (MS) and functional proteomic studies, i.e., investigating the association of post-translational modifications (PTMs) such as phosphorylation, glycosylation and protein degradation. The recent development of quantitative MS techniques such as stable isotope labeling with amino acids in cell culture (SILAC), isobaric tags for relative and absolute quantitation (iTRAQ) and multiple reaction monitoring (MRM) has allowed proteomic researchers to quantitatively compare data from different samples.
2D-DIGE has greatly improved the statistical power of classical 2D gel analysis by introducing an internal control. This chapter aims to review novel CaP biomarkers and to discuss current trends in biomarker research from two angles: the source of biomarkers (particularly human body fluids such as blood and urine), and emerging proteomic approaches for biomarker research.


Typical flow fields in a stormwater gross pollutant trap (GPT) with blocked retaining screens were experimentally captured and visualised. Particle image velocimetry (PIV) software was used to capture the flow field data by tracking neutrally buoyant particles with a high-speed camera. A technique was developed to apply the Image Based Flow Visualization (IBFV) algorithm to the experimental raw dataset generated by the PIV software. The dataset consists of scattered 2D point velocity vectors, and the IBFV visualisation facilitates flow feature characterisation within the GPT. These flow features play a pivotal role in understanding gross pollutant capture and retention within the GPT. It was found that the IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines, enabling specific areas to be investigated and flow features within the GPT to be identified.
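Feeding scattered PIV vectors into a dense visualisation algorithm such as IBFV requires evaluating a velocity at arbitrary positions. The sketch below shows one generic way to do that, inverse-distance-weighted interpolation; it is a stand-in under assumed names, not the authors' actual gridding method.

```python
# Hedged sketch: inverse-distance-weighted (IDW) interpolation of
# scattered 2D velocity vectors onto an arbitrary query point.

def idw_velocity(samples, qx, qy, power=2.0, eps=1e-12):
    """samples: list of (x, y, u, v) measured vectors.
    Returns the interpolated (u, v) at query point (qx, qy)."""
    wsum = usum = vsum = 0.0
    for x, y, u, v in samples:
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 < eps:                 # query coincides with a sample point
            return (u, v)
        w = 1.0 / d2 ** (power / 2.0)  # weight falls off as 1/d^power
        wsum += w
        usum += w * u
        vsum += w * v
    return (usum / wsum, vsum / wsum)

# Midpoint of two equal-distance samples gets the average velocity:
print(idw_velocity([(0, 0, 1.0, 0.0), (2, 0, 3.0, 0.0)], 1, 0))  # (2.0, 0.0)
```

Evaluating this on a regular grid yields the dense vector field that texture-advection methods like IBFV animate.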


Hematopoietic stem cell (HSC) transplant is a well-established curative therapy for some hematological malignancies. However, achieving an adequate supply of HSC from some donor tissues can limit both its application and ultimate efficacy. The theory that this limitation could be overcome by expanding the HSC population before transplantation has motivated numerous laboratories to develop ex vivo expansion processes. Pioneering work in this field utilized stromal cells as support cells in cocultures with HSC to mimic the HSC niche. We hypothesized that by translating this classic coculture system to a three-dimensional (3D) structure we could better replicate the niche environment and in turn enhance HSC expansion. Herein we describe a novel high-throughput 3D coculture system where murine-derived HSC can be cocultured with mesenchymal stem/stromal cells (MSC) in 3D microaggregates, which we term "micromarrows". Micromarrows were formed using surface-modified microwells, and their ability to support HSC expansion was compared to classic two-dimensional (2D) cocultures. While both 2D and 3D systems provide only a modest total cell expansion in the minimally supplemented medium, the micromarrow system supported the expansion of approximately twice as many HSC candidates as the 2D controls. Histology revealed that at day 7, the majority of bound hematopoietic cells reside in the outer layers of the aggregate. Quantitative polymerase chain reaction demonstrates that MSC maintained in 3D aggregates express significantly higher levels of key hematopoietic niche factors relative to their 2D equivalents. Thus, we propose that the micromarrow platform represents a promising first step toward a high-throughput HSC 3D coculture system that may enable in vitro HSC niche recapitulation and subsequent extensive in vitro HSC self-renewal.


The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that are able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. Outliers of the model with insufficient likelihood are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the HMM depends not only on the previous state in the temporal direction, but also on the previous states at adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
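The "insufficient likelihood implies abnormal" decision rule above can be illustrated with a plain 1D HMM scored by the forward algorithm. This is a simplified sketch: the paper's Semi-2D extension, which additionally conditions on adjacent spatial states, is omitted, and all model parameters here are toy values.

```python
# Simplified illustration: forward-algorithm likelihood of an observation
# sequence under a standard HMM, with low-likelihood sequences flagged
# as abnormal.

def forward_likelihood(pi, A, B, obs):
    """pi: initial state probabilities, A[s][t]: transition probs,
    B[s][o]: emission probs, obs: list of observation indices.
    Returns P(obs | model)."""
    S = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(S)]   # initialisation
    for o in obs[1:]:                                  # induction step
        alpha = [sum(alpha[s] * A[s][t] for s in range(S)) * B[t][o]
                 for t in range(S)]
    return sum(alpha)                                  # termination

def is_abnormal(pi, A, B, obs, threshold):
    """Novelty-detection rule: abnormal if likelihood under the model
    trained on normal data falls below a threshold."""
    return forward_likelihood(pi, A, B, obs) < threshold
```

In practice log-likelihoods are used to avoid underflow on long sequences, and the threshold is chosen from the likelihood distribution of held-out normal data.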


Flood-related scientific and community-based data are rarely systematically collected and analysed in the Philippines. Over recent decades the Pagsangaan River Basin, Leyte, has experienced several flood events. However, documentation describing flood characteristics such as extent, duration or height of these floods is close to non-existent. To address this issue, computerized flood modelling was used to reproduce past events for which data were available for at least partial calibration and validation. The model was also used to provide scenario-based predictions based on A1B climate change assumptions for the area. The most important input for flood modelling is a Digital Elevation Model (DEM) of the river basin. No accurate topographic maps or Light Detection And Ranging (LIDAR)-generated data are available for the Pagsangaan River. Therefore, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Map (GDEM), Version 1, was chosen as the DEM. Although its horizontal spatial resolution of 30 m is desirable, it contains substantial vertical errors. These were identified, different correction methods were tested, and the resulting DEM was used for flood modelling. The above-mentioned data were combined with cross-sections at various strategic locations of the river network, meteorological records, river water levels, and current velocities to develop the 1D-2D flood model. SOBEK was used as the modelling software to create different rainfall scenarios, including historic flooding events. Due to the lack of scientific data for verification of the model quality, interviews with local stakeholders served as the gauge to judge the quality of the generated flood maps. According to interviewees, the model reflects reality more accurately than previously available flood maps. The resulting flood maps are now used by the operations centre of a local flood early warning system for warnings and evacuation alerts.
Furthermore, these maps can serve as a basis for identifying flood hazard areas for spatial land use planning purposes.
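One family of DEM corrections for isolated vertical errors is neighbourhood filtering. The sketch below is a generic illustration, not the specific correction applied to the ASTER GDEM in this study: a 3x3 median filter that damps single-cell elevation spikes while leaving smooth terrain largely unchanged.

```python
# Generic illustration: damping isolated vertical spikes in a DEM grid
# with a 3x3 median filter.
import statistics

def median_filter(dem):
    """dem: 2D list of elevations (metres). Returns a filtered copy;
    border cells are left unchanged for simplicity."""
    H, W = len(dem), len(dem[0])
    out = [row[:] for row in dem]
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            window = [dem[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = statistics.median(window)
    return out

# A 99 m spike surrounded by 1 m terrain collapses to the neighbourhood median:
print(median_filter([[1, 1, 1], [1, 99, 1], [1, 1, 1]])[1][1])  # 1
```

Real correction workflows also use reference points (e.g. surveyed river cross-sections) to remove systematic vertical bias, not just local outliers.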


In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm, where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK, the computational cost is invariant to the number of filters and as a result far more efficient, and (iv) the approach can be extended to the inverse compositional form of the LK algorithm, where nearly all steps (including the Fourier transform and filter bank pre-processing) can be pre-computed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
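The key identity that lets LK's sum-of-squared-differences objective be posed in the Fourier domain is Parseval's theorem: the SSD between source and template equals the Fourier-domain SSD up to a 1/N factor. A small numerical check with a naive DFT (illustrative only; it mirrors the principle, not the paper's full derivation):

```python
# Numerical check of Parseval's theorem: spatial-domain SSD equals
# Fourier-domain SSD divided by N, which is what allows the LK objective
# to be minimised in either domain.
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

src = [1.0, 3.0, 2.0, 5.0]    # toy 1D "source" signal
tmpl = [2.0, 2.0, 1.0, 4.0]   # toy 1D "template" signal
N = len(src)

ssd_spatial = sum((a - b) ** 2 for a, b in zip(src, tmpl))
ssd_fourier = sum(abs(A - B) ** 2
                  for A, B in zip(dft(src), dft(tmpl))) / N
# ssd_spatial and ssd_fourier agree to floating-point precision.
```

Because the transform is linear, a filter bank applied to both signals becomes an elementwise (diagonal) weighting in the Fourier domain, which is the source of the FLK cost savings described above.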


The design and construction community has shown increasing interest in adopting building information models (BIMs). The richness of information provided by BIMs has the potential to streamline the design and construction processes by enabling enhanced communication, coordination, automation and analysis. However, there are many challenges in extracting construction-specific information out of BIMs. In most cases, construction practitioners have to manually identify the required information, which is inefficient and prone to error, particularly for complex, large-scale projects. This paper describes the process and methods we have formalized to partially automate the extraction and querying of construction-specific information from a BIM. We describe methods for analyzing a BIM to query for spatial information that is relevant for construction practitioners, and that is typically represented implicitly in a BIM. Our approach integrates ifcXML data and other spatial data to develop a richer model for construction users. We employ custom 2D topological XQuery predicates to answer a variety of spatial queries. The validation results demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
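The 2D topological predicates mentioned above can be illustrated with axis-aligned footprint tests over element bounding boxes. This is a hypothetical Python analogue of the XQuery predicates, with invented example data, not the paper's implementation:

```python
# Hypothetical sketch: 2D topological predicates over axis-aligned
# element footprints, of the kind used to answer spatial queries
# against building-model data.

def intersects(a, b):
    """a, b: (xmin, ymin, xmax, ymax) footprints. True if they overlap
    (touching edges count as intersecting)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def contains(outer, inner):
    """True if `inner` lies entirely within `outer`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

# e.g. answering "does this work zone fall inside the room footprint?"
room = (0, 0, 10, 10)
work_zone = (2, 2, 4, 4)
print(contains(room, work_zone))  # True
```

In the paper's pipeline the footprints would come from geometry extracted out of the ifcXML data, and the predicates are expressed as XQuery functions over that model.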


In the modern connected world, pervasive computing has become reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problem of multi-device key management and single-sign-on architectures. One solution to this problem is the utilization of secure side-channels for authentication, including the visual channel as vicinity proof. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating a man-in-the-middle attack. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising its security. We show how our approach fits into existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
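The one-time-password generation schemes the approach integrates with can be illustrated by standard HOTP (RFC 4226). This sketch is generic OTP machinery, not the QR-Auth protocol itself:

```python
# HOTP (RFC 4226): an HMAC-SHA1-based one-time password, the building
# block of many OTP schemes.
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute the HOTP value for a shared secret and event counter."""
    # HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # dynamic truncation: low nibble of the last byte selects the offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors:
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

In a barcode-based scheme like the one described, such a code (or the key material to derive it) is what gets conveyed over the visual channel rather than typed by the user.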


Graphene, one of the allotropes of carbon (alongside diamond, carbon nanotubes, and fullerenes), is a monolayer of carbon atoms in a honeycomb lattice, which was discovered in 2004. The Nobel Prize in Physics 2010 was awarded to Andre Geim and Konstantin Novoselov for their groundbreaking work on two-dimensional (2D) graphene [1]. Since its discovery, the research community has shown great interest in this novel material owing to its intriguing electrical, mechanical and thermal properties. It has been confirmed that graphene possesses very peculiar electrical properties, such as the anomalous quantum Hall effect and high electron mobility at room temperature (250,000 cm2/Vs). Graphene also has exceptional mechanical properties: it is one of the stiffest (modulus ~1 TPa) and strongest (strength ~100 GPa) materials. In addition, it has exceptional thermal conductivity (5,000 W m-1 K-1). Owing to these exceptional properties, graphene has demonstrated its potential for broad applications in micro- and nano-devices, various sensors, electrodes, solar cells, energy storage devices and nanocomposites. In particular, the excellent mechanical properties of graphene make it especially attractive for the development of next-generation nanocomposites and hybrid materials...