250 results for ricostruzione 3D triangolazione laser computervision


Relevance:

20.00%

Publisher:

Abstract:

3D in vitro model systems that are able to mimic the in vivo microenvironment are now highly sought after in cancer research. Antheraea mylitta silk fibroin protein matrices were investigated as a potential biomaterial for in vitro tumor modeling. We compared the characteristics of MDA-MB-231 cells on A. mylitta silk matrices, Bombyx mori silk matrices, Matrigel, and tissue culture plates. The attachment and morphology of the MDA-MB-231 cell line on A. mylitta silk matrices were found to be better than on B. mori matrices and comparable to Matrigel and tissue culture plates. The cells grown in all 3D cultures showed more MMP-9 activity, indicating a more invasive potential. In comparison to B. mori fibroin, A. mylitta fibroin not only provided better cell adhesion but also improved cell viability and proliferation. The yield coefficient of glucose consumed to lactate produced by cells on 3D A. mylitta fibroin was found to be similar to that of cancer cells in vivo. LNCaP prostate cancer cells were also cultured on 3D A. mylitta fibroin and grew as clumps in long-term culture. The results indicate that the A. mylitta fibroin scaffold can provide an easily manipulated microenvironment system in which to investigate individual factors such as growth factors and signaling peptides, as well as to evaluate anticancer drugs.

Relevance:

20.00%

Publisher:

Abstract:

Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as quantifiers of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on images obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals from a height of 80 cm above the ground. The results obtained from the image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor-measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R²) exceeding 0.8 are obtained when up to 10% of outliers are removed.
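To illustrate the kind of spectral statistic such an image-based quantifier might use, the following is a minimal sketch of an FFT-derived texture index for a grayscale pavement image; the radial cutoff, the normalisation, and the synthetic test images are assumptions for illustration, not the calibration or pipeline reported above.

```python
# Minimal sketch of an FFT-based macro-texture index for a pavement image.
# The cutoff radius and normalisation are illustrative assumptions, not the
# settings used in the paper.
import numpy as np

def fft_texture_index(gray, cutoff=0.1):
    """Fraction of spectral energy above a normalised radial frequency cutoff."""
    f = np.fft.fftshift(np.fft.fft2(gray - gray.mean()))
    power = np.abs(f) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)  # normalised radial frequency
    return power[radius > cutoff].sum() / power.sum()

# Example: a broadband "rough" surface yields a larger high-frequency share
# than a low-frequency-dominated "smooth" surface.
rng = np.random.default_rng(0)
smooth = np.cumsum(np.cumsum(rng.normal(size=(256, 256)), axis=0), axis=1)
rough = rng.normal(size=(256, 256))
print(fft_texture_index(smooth), fft_texture_index(rough))
```

In a real application the index would then be regressed against sensor-measured texture depth, which is where the coefficients of determination quoted above come from.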

Relevance:

20.00%

Publisher:

Abstract:

Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone of the face recognition community due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content, while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: wavelets, Gabor/Log-Gabor filters, and the Discrete Cosine Transform. Experimentation illustrates that frequency-domain partitioning prior to dimensionality reduction increases the information available for classification and greatly improves face recognition performance for both eigen-face and fisher-face approaches.
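A minimal sketch of the general idea, frequency-domain partitioning before subspace projection, is given below; the DCT band boundaries, subspace dimensions, and toy data are assumptions for illustration and are not the exact wavelet/Gabor/DCT configurations evaluated above.

```python
# Illustrative sketch of frequency-domain partitioning before subspace projection:
# each face is split into low/mid/high DCT bands, a PCA (eigenface-style) basis
# is fitted per band, and the per-band projections are concatenated as the
# feature vector. Band boundaries and dimensions are assumptions for illustration.
import numpy as np
from scipy.fft import dctn

def dct_bands(img, bounds=(8, 24)):
    """Split the 2D DCT of an image into low/mid/high radial-index bands."""
    c = dctn(img, norm="ortho")
    yy, xx = np.indices(c.shape)
    r = np.maximum(yy, xx)
    lo, hi = bounds
    return [c[(r >= a) & (r < b)].ravel()
            for a, b in ((0, lo), (lo, hi), (hi, max(c.shape)))]

def fit_pca(X, k):
    """Return mean and top-k principal directions of the rows of X."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

# Toy data: 20 random 32x32 "faces".
rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 32, 32))
bands = [np.stack([dct_bands(f)[b] for f in faces]) for b in range(3)]
models = [fit_pca(X, k=5) for X in bands]
features = np.hstack([(X - mu) @ W.T for X, (mu, W) in zip(bands, models)])
print(features.shape)  # (20, 15): 5 components from each of the 3 bands
```

A fisher-face variant would simply replace the per-band PCA with a linear discriminant projection fitted on labelled identities.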

Relevance:

20.00%

Publisher:

Abstract:

The field of literacy studies has always been challenged by the changing technologies that humans have used to express, represent and communicate their feelings, ideas, understandings and knowledge. However, while the written word has remained central to literacy processes over a long period, it is generally accepted that there have been significant changes to what constitutes ‘literate’ practice. In particular, the status of the printed word has been challenged by the increasing dominance of the image, along with the multimodal meaning-making systems facilitated by digital media. For example, Gunther Kress and other members of the New London Group have argued that the second half of the twentieth century saw a significant cultural shift from the linguistic to the visual as the dominant semiotic mode. This, in turn, they suggest, was accompanied by a cultural shift ‘from page to screen’ as a dominant space of representation (e.g. Cope & Kalantzis, 2000; Kress, 2003; New London Group, 1996). In a similar vein, Bill Green has noted that we have witnessed a shift from the regime of the print apparatus to a regime of the digital electronic apparatus (Lankshear, Snyder and Green, 2000). For these reasons, the field of literacy education has been challenged to find new ways to conceptualise what is meant by ‘literacy’ in the twenty-first century and to rethink the conditions under which children might best be taught to be fully literate so that they can operate with agency in today's world.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a preliminary crash avoidance framework for heavy equipment control systems. Safe equipment operation is a major concern on construction sites, since fatal on-site injuries are an industry-wide problem. The proposed framework has the potential to enable active safety for equipment operation. The framework contains algorithms for spatial modeling, object tracking, and path planning. Beyond generating spatial models in fractions of a second, these algorithms can successfully track objects in an environment and produce a collision-free 3D motion trajectory for the equipment.
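As an illustration of the path-planning component of such a framework, the sketch below runs A* over a 3D occupancy grid to return a collision-free voxel path; the grid resolution, 6-connectivity, and toy obstacle layout are assumptions, not the algorithms reported above.

```python
# Hedged sketch of one piece of such a framework: A* search over a 3D occupancy
# grid to produce a collision-free motion path. Connectivity and obstacle layout
# are illustrative assumptions, not the paper's system.
import heapq
import numpy as np

def astar_3d(occ, start, goal):
    """Shortest collision-free path between two free voxels (6-connected)."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    h = lambda p: sum(abs(a - b) for a, b in zip(p, goal))  # Manhattan heuristic
    frontier = [(h(start), 0, start, None)]
    came, cost = {}, {start: 0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came:
            continue
        came[node] = parent
        if node == goal:                      # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (node[0] + dx, node[1] + dy, node[2] + dz)
            if any(c < 0 or c >= s for c, s in zip(nxt, occ.shape)) or occ[nxt]:
                continue                      # out of bounds or occupied voxel
            if g + 1 < cost.get(nxt, np.inf):
                cost[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, node))
    return None  # no collision-free path exists

# Toy scene: a wall of occupied voxels with a single gap the path must pass through.
occ = np.zeros((10, 10, 10), dtype=bool)
occ[5, :, :] = True
occ[5, 4, 4] = False
print(astar_3d(occ, (0, 4, 4), (9, 4, 4)))
```

In a full system the occupancy grid would be refreshed from sensor data as tracked objects move, and the planner re-run whenever the current trajectory becomes blocked.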

Relevance:

20.00%

Publisher:

Abstract:

On obstacle-cluttered construction sites, understanding the motion characteristics of objects is important for anticipating collisions and preventing accidents. This study investigates algorithms for object identification applications that can be used by heavy equipment operators to effectively monitor a congested local environment. The proposed framework contains algorithms for three-dimensional spatial modeling and image matching that are based on 3D images scanned by a high-frame-rate range sensor. The preliminary results show that an occupancy grid spatial modeling algorithm can successfully build the most pertinent spatial information, and that an image matching algorithm is best able to identify which objects are in the scanned scene.
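The sketch below illustrates the two stages in simplified form: voxelising a range-sensor point cloud into an occupancy grid, and identifying the scanned object by scoring its occupancy pattern against stored templates. The voxel size, grid extent, and intersection-over-union score are illustrative assumptions rather than the algorithms evaluated above.

```python
# Minimal sketch, under assumed parameters, of (1) voxelising a scanned point
# cloud into an occupancy grid and (2) identifying the object by scoring its
# occupancy pattern against stored template grids.
import numpy as np

def occupancy_grid(points, voxel=0.1, extent=((0, 2), (0, 2), (0, 2))):
    """Mark every voxel that contains at least one range-sensor return."""
    lows = np.array([lo for lo, _ in extent])
    dims = tuple(int(np.ceil((hi - lo) / voxel)) for lo, hi in extent)
    grid = np.zeros(dims, dtype=bool)
    idx = np.floor((points - lows) / voxel).astype(int)
    idx = idx[np.all((idx >= 0) & (idx < dims), axis=1)]  # drop out-of-range returns
    grid[tuple(idx.T)] = True
    return grid

def identify(scan_grid, templates):
    """Return the template label with the highest intersection-over-union."""
    def iou(a, b):
        return (a & b).sum() / max((a | b).sum(), 1)
    return max(templates, key=lambda name: iou(scan_grid, templates[name]))

# Toy example: a scanned "column" of points is matched against two templates.
rng = np.random.default_rng(2)
column_pts = rng.uniform([0.9, 0.9, 0.0], [1.1, 1.1, 2.0], size=(500, 3))
slab_pts = rng.uniform([0.0, 0.0, 0.0], [2.0, 2.0, 0.2], size=(500, 3))
templates = {"column": occupancy_grid(column_pts), "slab": occupancy_grid(slab_pts)}
scan = occupancy_grid(column_pts + rng.normal(scale=0.02, size=column_pts.shape))
print(identify(scan, templates))  # expected: "column"
```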

Relevance:

20.00%

Publisher:

Abstract:

Objective: Laser Doppler imaging (LDI) was compared to wound outcomes in children's burns to determine whether the technology could be used to predict these outcomes. Methods: Forty-eight patients with a total of 85 burns were included in the study. The median patient age was 4 years 10 months, and scans were taken 0–186 h post-burn using the fast, low-resolution setting on the Moor LDI2 laser Doppler imager. Wounds were managed by standard practice, without taking the scan results into account. Time until complete re-epithelialisation and whether or not grafting and scar management were required were recorded for each wound. Whether wounds were treated with Silvazine™ or Acticoat™ prior to the scan was also recorded. Results: The predominant colour of the scan was found to be significantly related to the re-epithelialisation, grafting and scar management outcomes and could be used to predict those outcomes. The prior use of Acticoat™ did not affect the relationship between scans and outcomes; however, the use of Silvazine™ complicated the relationship for partial-thickness wounds that scanned light blue or green. Scans taken within the 24-h window post-burn also appeared to be accurate predictors of wound outcome. Conclusion: Laser Doppler imaging, using a low-resolution fast scan, is accurate and effective in a paediatric population.

Relevance:

20.00%

Publisher:

Abstract:

While recent research has provided valuable information as to the composition of laser printer particles and their formation mechanisms, and has explained why some printers are high emitters whilst others are low emitters, fundamental questions relating to the potential exposure of office workers remained unanswered. In particular: (i) what impact does the operation of laser printers have on the background particle number concentration (PNC) of an office environment over the duration of a typical working day? (ii) What is the airborne particle exposure of office workers in the vicinity of laser printers? (iii) What influence does the office ventilation have upon the transport and concentration of particles? (iv) Is there a need to control the generation and/or transport of particles arising from the operation of laser printers within an office environment? (v) What instrumentation and methodology are relevant for characterising such particles within an office location? We present experimental evidence on temporal and spatial printer PNC during the operation of 107 laser printers within open-plan offices of five buildings. We show for the first time that the eight-hour time-weighted average printer particle exposure is significantly less than the eight-hour time-weighted local background particle exposure, but that peak printer particle exposure can be more than two orders of magnitude higher than local background particle exposure. The particle size range is predominantly ultrafine (<100 nm diameter). In addition, we have established that office workers are constantly exposed to non-printer-derived particle concentrations, with up to an order of magnitude difference in such exposure amongst offices, and propose that such exposure be controlled along with exposure to printer-derived particles. We also propose, for the first time, that peak particle reference values be calculated for each office area, analogous to the criteria used in Australia and elsewhere for evaluating exposure excursions above occupational hazardous chemical exposure standards. A universal peak particle reference value of 2.0 × 10⁴ particles cm⁻³ has been proposed.
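For clarity, the sketch below shows the arithmetic behind an eight-hour time-weighted average particle exposure and a peak-excursion check against a reference value; the sampling interval, synthetic concentrations, and the use of the 2.0 × 10⁴ particles cm⁻³ figure are purely illustrative and are not the measured data reported above.

```python
# Hedged sketch of the exposure metrics discussed above: an eight-hour
# time-weighted average (TWA) PNC and a check of peak excursions against a
# peak particle reference value. All numbers are synthetic illustrations.
import numpy as np

def eight_hour_twa(pnc, interval_min):
    """TWA over an 8-h shift for equally spaced PNC samples (exposure outside
    the record assumed zero if the record is shorter than 8 h)."""
    hours = len(pnc) * interval_min / 60.0
    return pnc.mean() * hours / 8.0

def peak_excursions(pnc, reference):
    """Indices of samples exceeding the peak particle reference value."""
    return np.flatnonzero(pnc > reference)

# Synthetic one-minute record: low office background with two brief print bursts.
rng = np.random.default_rng(3)
pnc = rng.normal(4e3, 5e2, size=480)   # background, particles/cm^3
pnc[120:125] += 3e5                    # print-job burst, ~2 orders of magnitude higher
pnc[300:302] += 1e5
print(f"8-h TWA: {eight_hour_twa(pnc, 1):.3g} particles/cm^3")
print("excursions above 2.0e4:", peak_excursions(pnc, 2.0e4))
```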