877 results for High-performance computing hyperspectral imaging


Relevance: 100.00%

Abstract:

Matrix sublimation has proven to be a powerful approach for high-resolution matrix-assisted laser desorption ionization (MALDI) imaging of lipids, providing very homogeneous, solvent-free deposition. This work presents a comprehensive study evaluating current and novel matrix candidates for high spatial resolution MALDI imaging mass spectrometry of lipids from tissue sections after deposition by sublimation. For this purpose, 12 matrices including 2,5-dihydroxybenzoic acid (DHB), sinapinic acid (SA), α-cyano-4-hydroxycinnamic acid (CHCA), 2,6-dihydroxyacetophenone (DHA), 2',4',6'-trihydroxyacetophenone (THAP), 3-hydroxypicolinic acid (3-HPA), 1,8-bis(dimethylamino)naphthalene (DMAN), 1,8,9-anthracenetriol (DIT), 1,5-diaminonaphthalene (DAN), p-nitroaniline (NIT), 9-aminoacridine (9-AA), and 2-mercaptobenzothiazole (MBT) were investigated for lipid detection efficiency in both positive and negative ionization modes, matrix interferences, and stability under vacuum. For the most relevant matrices, ion maps of the different lipid species were obtained from tissue sections at high spatial resolution, and the detected peaks were characterized by matrix-assisted laser desorption ionization time-of-flight/time-of-flight (MALDI-TOF/TOF) mass spectrometry. DAN, proposed here for the first time for imaging mass spectrometry (IMS) after sublimation, proved highly efficient, providing rich lipid signatures in both positive and negative polarities with high vacuum stability and sub-20 μm resolution capacity. Ion images from adult mouse brain were generated at a 10 μm scanning resolution. Furthermore, ion images from adult mouse brain and whole-body fish tissue sections were also acquired in both polarity modes from the same tissue section at 100 μm spatial resolution. Sublimation of DAN thus represents an interesting approach to improving the information obtained with currently employed matrices, enabling a deeper analysis of the lipidome by IMS.


This review covers some of the contributions to date from cerebellar imaging studies performed at ultra-high magnetic fields. A short overview of the general advantages and drawbacks of such high-field systems for imaging is given. One of the biggest advantages of imaging at high magnetic fields is the improved spatial resolution, achievable thanks to the increased available signal-to-noise ratio. This high spatial resolution better matches the dimensions of the cerebellar substructures, allowing a better definition of these structures in the images. The implications of using high-field systems are discussed for several imaging sequences and image contrast mechanisms. This review covers studies performed in vivo in both rodents and humans, with a special focus on studies directed towards the observation of the different cerebellar layers.


Future experiments in nuclear and particle physics are moving towards the high-luminosity regime in order to access rare processes. In this framework, particle detectors require high rate capability together with excellent timing resolution for precise event reconstruction. To achieve this, the development of dedicated Front-End Electronics (FEE) for detectors has become increasingly challenging and expensive. Thus, a current trend in R&D is towards flexible FEE that can be easily adapted to a great variety of detectors without impairing the required high performance. This thesis reports on a novel FEE for two different detector types: imaging Cherenkov counters and plastic scintillator arrays. The former requires high sensitivity and precision for the detection of single-photon signals, while the latter is characterized by the slower and larger signals typical of scintillation processes. The FEE design was developed using high-bandwidth preamplifiers and fast discriminators which provide Time-over-Threshold (ToT). The use of discriminators allowed for low power consumption, minimal dead times, and self-triggering capabilities, all fundamental aspects for high-rate applications. The output signals of the FEE are read out by a high-precision FPGA-based TDC system. A full characterization of the analogue signals under realistic conditions showed that the ToT information can be used in a novel way for charge measurements or walk corrections, thus improving the obtainable timing resolution. Detailed laboratory investigations proved the feasibility of the ToT method. The full readout chain was investigated in test experiments at the Mainz Microtron: high counting rates per channel of several MHz were achieved, and a timing resolution of better than 100 ps after ToT-based walk correction was obtained. Ongoing applications to fast Time-of-Flight counters and future developments of the FEE have also been investigated recently.
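The ToT-based walk correction described above can be sketched numerically; the following is a minimal illustration with an invented pulse model, in which an amplitude-dependent time shift (walk) is parameterized as a polynomial in Time-over-Threshold and subtracted. All numbers and the walk model are hypothetical, not taken from the thesis.

```python
import numpy as np

def walk_correction(t_meas, tot, coeffs):
    """Subtract an amplitude-dependent walk term modelled as a
    polynomial in Time-over-Threshold (hypothetical calibration)."""
    return t_meas - np.polyval(coeffs, tot)

# Simulated hits: true time is 0 ns; walk depends on ToT (toy model).
rng = np.random.default_rng(0)
tot = rng.uniform(5.0, 50.0, 10_000)           # ToT in ns
walk = 2.0 / tot + 0.01 * tot                  # invented walk behaviour
t_meas = walk + rng.normal(0.0, 0.05, tot.size)  # + 50 ps intrinsic jitter

# Calibrate the correction on a reference sample (here: the data itself),
# then apply it; the spread of corrected times shrinks towards the jitter.
coeffs = np.polyfit(tot, t_meas, deg=4)
t_corr = walk_correction(t_meas, tot, coeffs)
print(f"sigma before: {t_meas.std()*1e3:.0f} ps, after: {t_corr.std()*1e3:.0f} ps")
```

The same idea carries over to charge measurements, since ToT is a monotonic function of the deposited charge for a fixed pulse shape.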


BACKGROUND: The aim of this study was to investigate the biochemical properties, histological and immunohistochemical appearance, and magnetic resonance (MR) imaging findings of reparative cartilage after autologous chondrocyte implantation (ACI) for osteochondritis dissecans (OCD). METHODS: Six patients (mean age 20.2 ± 8.8 years; range 13-35 years) who underwent ACI for full-thickness cartilage defects of the femoral condyle were studied. One year after the procedure, a second-look arthroscopic operation was performed with biopsy of the reparative tissue. The International Cartilage Repair Society (ICRS) visual histological assessment scale was used for histological assessment. Biopsied tissue was immunohistochemically analyzed with monoclonal anti-human collagen type I and anti-human collagen type II primary antibodies. Glycosaminoglycan (GAG) concentrations in biopsied reparative cartilage samples were measured by high-performance liquid chromatography (HPLC). MR imaging was performed with T1- and T2-weighted imaging and three-dimensional spoiled gradient-recalled (3D-SPGR) MR imaging. RESULTS: Four tissue samples were graded as having a mixed morphology of hyaline and fibrocartilage, while the other two were graded as fibrocartilage. Average ICRS scores for each criterion were (I) 1.0 ± 1.5; (II) 1.7 ± 0.5; (III) 0.6 ± 1.0; (IV) 3.0 ± 0.0; (V) 1.8 ± 1.5; and (VI) 2.5 ± 1.2; the average total score was 10.7 ± 2.8. On immunohistochemical analysis, the matrix from the deep and middle layers of reparative cartilage stained positive for type II collagen, whereas the surface layer did not stain well. The average GAG concentration in reparative cartilage was 76.6 ± 4.2 μg/mg, whereas that in normal cartilage was 108 ± 11.2 μg/mg. Common complications observed on 3D-SPGR MR imaging were hypertrophy of the grafted periosteum, edema-like signal in the bone marrow, and incomplete repair of the subchondral bone at the surgical site. Clinically, patients had significant improvements in Lysholm scores. CONCLUSIONS: In spite of a good clinical course, reparative cartilage after ACI had a lower GAG concentration than healthy hyaline cartilage and was inferior to it in histological and immunohistochemical appearance and in MR imaging findings.


OBJECTIVE: The objective of this study was to evaluate the feasibility and reproducibility of high-resolution magnetic resonance imaging (MRI) and quantitative T2 mapping of the talocrural cartilage within a clinically applicable scan time, using a new dedicated ankle coil and high-field MRI. MATERIALS AND METHODS: Ten healthy volunteers (mean age 32.4 years) underwent MRI of the ankle. For morphological imaging, proton-density fat-suppressed turbo spin echo (PD-FS-TSE), as a reference, was compared with 3D true fast imaging with steady-state precession (TrueFISP). Furthermore, biochemical quantitative T2 imaging was performed using a multi-echo spin-echo T2 approach. Data analysis was performed three times each by three different observers on sagittal slices planned on the isotropic 3D-TrueFISP: cartilage thickness was assessed as a morphological parameter, and region-of-interest (ROI) evaluation was performed for T2 relaxation times. Reproducibility was determined as a coefficient of variation (CV) for each volunteer, averaged as a root mean square (RMSA) and given as a percentage; statistical evaluation used analysis of variance. RESULTS: Cartilage thickness of the talocrural joint showed significantly higher values for the 3D-TrueFISP (ranging from 1.07 to 1.14 mm) than for the PD-FS-TSE (ranging from 0.74 to 0.99 mm); however, both morphological sequences showed comparably good reproducibility, with RMSA values of 7.1 to 8.5%. Regarding quantitative T2 mapping, measurements showed T2 relaxation times of about 54 ms with excellent reproducibility (RMSA of 3.2 to 4.7%). CONCLUSION: In our study, the assessment of cartilage thickness and T2 relaxation times could be performed with high reproducibility within a clinically realizable scan time, opening new possibilities for further investigations in patient groups.
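The two quantities at the heart of this study can be sketched numerically: a T2 value fitted from multi-echo data under the usual mono-exponential model S(TE) = S0·exp(-TE/T2), and a root-mean-square average of per-volunteer CVs. The echo times and signal values below are hypothetical; only the 54 ms T2 is taken from the study.

```python
import numpy as np

def fit_t2(te, signal):
    """Mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2),
    linearised as ln S = ln S0 - TE / T2 and solved by least squares."""
    slope, _ = np.polyfit(te, np.log(signal), 1)
    return -1.0 / slope

def rms_cv(per_subject_cv):
    """Root-mean-square average (RMSA) of per-volunteer CVs, in percent."""
    cv = np.asarray(per_subject_cv, dtype=float)
    return float(np.sqrt(np.mean(cv ** 2)))

# Hypothetical multi-echo measurement with T2 = 54 ms (the study's value).
te = np.arange(10, 81, 10)                 # echo times in ms
signal = 1000.0 * np.exp(-te / 54.0)       # noiseless synthetic signal
print(f"fitted T2 = {fit_t2(te, signal):.1f} ms")
print(f"RMSA of CVs [3, 4, 5]% = {rms_cv([3.0, 4.0, 5.0]):.2f}%")
```

In practice the fit is applied pixel-wise (or per ROI) to noisy magnitude data, where weighted or nonlinear fitting is preferable to the plain log-linear form shown here.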


INTRODUCTION: Cartilage defects are common pathologies, and surgical cartilage repair shows promising results. For its postoperative evaluation, the magnetic resonance observation of cartilage repair tissue (MOCART) score, which uses different variables to describe the constitution of the cartilage repair tissue and the surrounding structures, is widely used. High-field magnetic resonance imaging (MRI) and 3-dimensional (3D) isotropic sequences may combine ideal preconditions for enhancing the diagnostic performance of cartilage imaging. The aim of this study was to introduce an improved 3D MOCART score using the possibilities of an isotropic 3D true fast imaging with steady-state precession (TrueFISP) sequence in the postoperative evaluation of patients after matrix-associated autologous chondrocyte transplantation (MACT), as well as to compare the results to the conventional 2D MOCART score using standard MR sequences. MATERIAL AND METHODS: The study was approved by the local ethics commission. One hundred consecutive MR scans in 60 patients at standard follow-up intervals of 1, 3, 6, 12, 24, and 60 months after MACT of the knee joint were prospectively included. The mean follow-up interval of this cross-sectional evaluation was 21.4 ± 20.6 months; the mean age of the patients was 35.8 ± 9.4 years. MRI was performed on a 3.0 Tesla unit. All variables of the standard 2D MOCART score were part of the new 3D MOCART score. Furthermore, additional variables and options were included with the aims of using the capabilities of isotropic MRI, incorporating the results of recent studies, and adapting to the needs of patients and physicians in routine clinical examinations. A proton-density turbo spin-echo sequence, a T2-weighted dual fast spin-echo (dual-FSE) sequence, and a T1-weighted turbo inversion recovery magnitude (TIRM) sequence were used to assess the standard 2D MOCART score; an isotropic 3D-TrueFISP sequence was used to evaluate the new 3D MOCART score. All 9 variables of the 2D MOCART score were compared with the corresponding variables obtained by the 3D MOCART score using the Pearson correlation coefficient; additionally, the subjective quality and possible artifacts of the MR sequences were analyzed. RESULTS: The correlation between the standard 2D MOCART score and the new 3D MOCART score was highly significant (P < 0.001) for the 8 variables "defect fill," "cartilage interface," "surface," "adhesions," "structure," "signal intensity," "subchondral lamina," and "effusion," with Pearson coefficients between 0.566 and 0.932. The variable "bone marrow edema" correlated significantly (P < 0.05; Pearson coefficient 0.257). The subjective quality of the 3 standard MR sequences was comparable to that of the isotropic 3D-TrueFISP sequence. Artifacts were more frequently visible within the 3D-TrueFISP sequence. CONCLUSION: In routine clinical follow-up after cartilage repair, the 3D MOCART score, assessed with only 1 high-resolution isotropic MR sequence, provides information comparable to the standard 2D MOCART score. Hence, the new 3D MOCART score has the potential to combine the information of the standard 2D MOCART score with the possible advantages of isotropic 3D MRI at high field. A clear limitation of the 3D-TrueFISP sequence was the high number of artifacts. Future studies will have to prove the clinical benefits of the 3D MOCART score.
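The variable-by-variable comparison above rests on the Pearson correlation coefficient; a minimal sketch with hypothetical ordinal gradings (the score values below are invented for illustration, not taken from the study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical "defect fill" gradings from the 2D and 3D readings.
score_2d = [4, 3, 4, 2, 4, 1, 3, 4, 2, 3]
score_3d = [4, 3, 3, 2, 4, 1, 3, 4, 2, 4]
print(f"r = {pearson_r(score_2d, score_3d):.3f}")  # → r = 0.900
```

For ordinal MOCART-style gradings, a rank correlation (Spearman) is sometimes preferred over Pearson, but the computation above matches the statistic the study reports.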


BACKGROUND AND PURPOSE: High-resolution vascular MR imaging of the spine region in small animals poses several challenges. The small anatomic features, extravascular diffusion, and low signal-to-noise ratio limit the use of conventional contrast agents. We hypothesized that a long-circulating, intravascular, liposomal-encapsulated MR contrast agent (liposomal-Gd) would facilitate visualization of small anatomic features of the perispinal vasculature not visible with a conventional contrast agent (gadolinium-diethylene-triaminepentaacetic acid [Gd-DTPA]). METHODS: In this study, high-resolution MR angiography of the spine region was performed in a rat model using liposomal-Gd, which is known to remain within the blood pool for an extended period. The imaging characteristics of this agent were compared with those of a conventional contrast agent, Gd-DTPA. RESULTS: The liposomal-Gd enabled acquisition of high-quality angiograms with a high signal-to-noise ratio. Several important vascular features, such as the radicular arteries, the posterior spinal vein, and the epidural venous plexus, were visualized in the angiograms obtained with the liposomal agent. The MR angiograms obtained with conventional Gd-DTPA did not demonstrate these vessels clearly because of marked extravascular soft-tissue enhancement that obscured the vasculature. CONCLUSIONS: This study demonstrates the potential benefit of long-circulating liposomal-Gd as an MR contrast agent for high-resolution vascular imaging applications.


The CoastColour Round Robin (CCRR) project (http://www.coastcolour.org), funded by the European Space Agency (ESA), was designed to bring together a variety of reference datasets and to use these to test algorithms and assess their accuracy for retrieving water quality parameters. This information was then used to help end-users of remote sensing products select the most accurate algorithms for their coastal region. To facilitate this, an inter-comparison of the performance of algorithms for the retrieval of in-water properties over coastal waters was carried out. The comparison used three types of datasets on which ocean colour algorithms were tested. The description and comparison of the three datasets are the focus of this paper; they include the Medium Resolution Imaging Spectrometer (MERIS) Level 2 match-ups, in situ reflectance measurements, and data generated by a radiative transfer model (HydroLight). The datasets mainly consisted of 6,484 marine reflectance spectra associated with various geometrical (sensor viewing and solar angles) and sky conditions and water constituents: Total Suspended Matter (TSM) and Chlorophyll-a (CHL) concentrations, and the absorption of Coloured Dissolved Organic Matter (CDOM). Inherent optical properties were also provided in the simulated datasets (5,000 simulations) and from 3,054 match-up locations. The distributions of reflectance at selected MERIS bands and band ratios, and of CHL and TSM as a function of reflectance, from the three datasets are compared. Match-up and in situ sites where deviations occur are identified. The distributions of the three reflectance datasets are also compared to the simulated and in situ reflectances used previously by the International Ocean Colour Coordinating Group (IOCCG, 2006) for algorithm testing, showing a clear extension of the CCRR data, which cover more turbid waters.


This article examines, from the energy viewpoint, a new lightweight, slim, highly energy-efficient, light-transmitting envelope system providing for seamless, free-form designs in architectural projects. The research was based on envelope components already existing on the market, especially components implemented with granular silica gel insulation, as this is the most effective translucent thermal insulation available today. The tests run on these materials revealed that no single one has all the features required of the new envelope model, although some do have properties that could be exploited to generate this envelope, namely the vacuum chamber of vacuum insulated panels (VIP), the monolithic aerogel used as insulation in some prototypes, and reinforced polyester barriers. By combining these three design components (the high-performance thermal insulation of the vacuum chamber combined with monolithic silica gel insulation, and the free-form design potential provided by materials like reinforced polyester and epoxy resins), we have been able to define and test a new, variable-geometry, energy-saving envelope system.


The high performance and capacity of current FPGAs make them suitable as acceleration co-processors. This article studies the implementation, for such accelerators, of the floating-point power function x^y as defined by the C99 and IEEE 754-2008 standards, generalized here to arbitrary exponent and mantissa sizes. Last-bit accuracy at the smallest possible cost is obtained thanks to a careful study of the various subcomponents: a floating-point logarithm, a modified floating-point exponential, and a truncated floating-point multiplier. A parameterized architecture generator in the open-source FloPoCo project is presented in detail and evaluated.
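The decomposition underlying such a power unit can be illustrated in software. The sketch below evaluates x^y = exp(y·ln x), the standard decomposition for the floating-point power function; a last-bit-accurate hardware design like the one described must carry extra internal precision, because the relative error of the final exponential grows with the magnitude of y·ln x.

```python
import math

def pow_via_exp_log(x, y):
    """Evaluate x**y as exp(y * log(x)) for x > 0, the decomposition
    used by typical floating-point power implementations."""
    return math.exp(y * math.log(x))

# The decomposition is algebraically exact; in finite precision the two
# roundings (log, then exp) leave an error of a few ulps at most here.
for x, y in [(2.0, 10.0), (1.5, 3.0), (10.0, 0.5)]:
    print(x, y, pow_via_exp_log(x, y), x ** y)
```

This double-precision sketch is not last-bit accurate in general, which is precisely the problem the FloPoCo architecture solves by widening the intermediate logarithm and product.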


Hyperspectral image analysis provides information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet. These images are having a strong impact in medicine and, in particular, in the detection of several types of cancer. One of the main open problems in this field is real-time analysis: because of the large data volume of these images, the required computational power is very high. One of the main research lines aimed at reducing this processing time is based on distributing the analysis across several cores working in parallel. Along this line, the present work develops a library for the RVC-CAL language (a language specifically designed for multimedia applications, which allows parallelization to be expressed in an intuitive way) that gathers the functions needed to implement two of the four stages of the spectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by that of Raquel Lazcano in her Diploma Project, which develops the functions needed to complete the other two stages of the unmixing chain. The document is divided into several parts. The first presents the reasons that motivated this Diploma Project and the objectives it aims to achieve. An extensive survey of the state of the art follows, explaining both hyperspectral images and the tools and platforms used to split the processing across several cores, as well as the problems that may arise when doing so. With the theoretical background established, the document focuses on the methodology followed to compose the unmixing chain and generate the library; an important point here is the use of specialized C++ libraries for complex matrix operations. After explaining the method, the results obtained are presented, first per stage and then for the complete processing chain implemented on one or several cores. Finally, a series of conclusions drawn from analyzing the different algorithms in terms of quality of results, processing times, and resource consumption is given, and several possible lines of future work related to these results are proposed.
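The two stages covered by this library, dimensionality reduction and endmember extraction, can be sketched in a few lines. The abstract does not name the specific algorithms, so the following minimal illustration on synthetic data uses SVD-based reduction and a simplified ATGP-style extraction purely as stand-ins.

```python
import numpy as np

def reduce_dims(cube, k):
    """PCA-style dimensionality reduction of a (pixels x bands) matrix
    via SVD; a stand-in for the library's reduction stage."""
    x = cube - cube.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T

def extract_endmembers(cube, p):
    """Toy ATGP-like extraction: repeatedly pick the pixel with the
    largest residual after projecting out the endmembers found so far."""
    pixels = cube.astype(float)
    idx = [int(np.argmax(np.linalg.norm(pixels, axis=1)))]
    for _ in range(p - 1):
        e = pixels[idx].T                    # bands x endmembers found
        proj = e @ np.linalg.pinv(e)         # orthogonal projector onto span
        resid = pixels - pixels @ proj.T
        idx.append(int(np.argmax(np.linalg.norm(resid, axis=1))))
    return idx

# Synthetic 3-endmember scene: 500 mixed pixels over 50 bands plus noise.
rng = np.random.default_rng(1)
endmembers = rng.uniform(0.0, 1.0, (3, 50))
abund = rng.dirichlet(np.ones(3), size=500)
cube = abund @ endmembers + rng.normal(0.0, 0.001, (500, 50))
reduced = reduce_dims(cube, 3)
print(reduced.shape, extract_endmembers(cube, 3))
```

Both functions operate on the whole pixel matrix at once, which is exactly the kind of dense matrix workload that the specialized C++ libraries mentioned above, and the RVC-CAL actor partitioning, are meant to accelerate.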


Hyperspectral images provide information with very high spectral resolution, usually spanning from the ultraviolet to the infrared. Although this technology was initially applied to Earth observation, this capability has led, in recent years, to its expansion into other fields, such as medicine and, in particular, cancer detection. This new field of application, however, has generated new requirements, such as processing the images in real time. Precisely because of their high spectral resolution, these images demand a high computational capacity, which makes real-time processing unattainable with traditional techniques. One of the main research lines therefore pursues real-time operation through parallelization, dividing the computational load among several cores working simultaneously. Along this line, the present document describes the development of a hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and provides the tools needed to parallelize them. This Diploma Project develops the functions required to implement two of the four stages of the hyperspectral analysis chain: the estimation of the number of endmembers and the estimation of their abundances in the image. This work is complemented by that of Daniel Madroñal in his Diploma Project, which develops the functions needed to complete the other two stages of the chain: dimensionality reduction and endmember extraction. The document follows the classical structure of a research work. It first presents the motivations behind this Diploma Project and the objectives it aims to achieve. It then provides an extensive analysis of the state of the art of the technologies involved: on the one hand, hyperspectral images and, on the other, all the hardware and software resources needed to implement the library, thus supplying the technical concepts required to follow the document. After that, it details the methodology followed to generate the library, as well as the implementation of a complete hyperspectral image processing chain that allows both the quality of the library and the time needed to analyze a full hyperspectral image to be evaluated, executing the chain on either one or several cores. Once the methodology has been presented, the results obtained in the tests are analyzed in detail: first the individual results of the two implemented stages, and then those produced by the execution of the complete chain, in one as well as in several cores, analyzing the effects of parallelization. Finally, a series of conclusions is drawn covering aspects such as quality of results, execution times, and resource consumption, and several future lines of work are proposed that could continue and complement the research developed in this document.
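The abundance-estimation stage covered by this project can be sketched as a least-squares unmixing problem. The abstract does not name the estimator used, so the minimal illustration below solves the unconstrained problem and then clips and renormalizes, as a rough stand-in for the sum-to-one and nonnegativity constrained estimators real unmixing chains employ.

```python
import numpy as np

def estimate_abundances(pixels, endmembers):
    """Least-squares unmixing of pixels = A @ E for the abundance matrix A,
    followed by clipping to [0, inf) and renormalization to sum to one
    (a rough stand-in for fully constrained estimators)."""
    a, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
    a = np.clip(a.T, 0.0, None)
    return a / a.sum(axis=1, keepdims=True)

# Synthetic scene: 200 pixels over 40 bands mixed from 3 known endmembers.
rng = np.random.default_rng(2)
endmembers = rng.uniform(0.0, 1.0, (3, 40))
true_abund = rng.dirichlet(np.ones(3), size=200)
pixels = true_abund @ endmembers                  # noiseless mixtures
est = estimate_abundances(pixels, endmembers)
print(f"max abundance error: {np.abs(est - true_abund).max():.2e}")
```

On noiseless data the estimator recovers the true abundances essentially exactly; with sensor noise, the per-pixel independence of this solve is what makes the stage embarrassingly parallel across cores.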


The self-assembly of cobalt coordination frameworks (Co-CPs) with a two-dimensional morphology is demonstrated by a solvothermal method. The morphology of the Co-CPs was controlled by varying the solvothermal conditions. Two-dimensional nanostructures agglomerated from Co3O4 nanoparticles remained after pyrolysis of the Co-CPs. The as-synthesized Co3O4 anode material was characterized by cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS), and galvanostatic charge-discharge measurements. The morphology of Co3O4 plays a crucial role in its performance as an anode material for lithium batteries. Co3O4 nanoparticles with an opened-book morphology deliver a high capacity of 597 mA h g⁻¹ after 50 cycles at a current rate of 800 mA g⁻¹. The opened-book morphology provides efficient lithium-ion diffusion tunnels and increases the electrolyte/Co3O4 contact/interfacial area. Even at a relatively high current rate of 1200 mA g⁻¹, Co3O4 with the opened-book morphology delivers an excellent rate capability of 574 mA h g⁻¹.
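The reported capacities can be put in context against the theoretical capacity of Co3O4, which follows from Faraday's law for the 8-electron conversion reaction Co3O4 + 8 Li+ + 8 e- → 3 Co + 4 Li2O; a quick back-of-the-envelope calculation:

```python
# Theoretical gravimetric capacity from Faraday's law:
#   Q [mAh/g] = n * F / (3.6 * M)
# with n electrons per formula unit and M the molar mass.
F = 96485.0        # Faraday constant, C/mol
M_CO3O4 = 240.80   # molar mass of Co3O4, g/mol
n = 8              # electrons in the conversion reaction

q_theory = n * F / (3.6 * M_CO3O4)
print(f"theoretical capacity of Co3O4: {q_theory:.0f} mAh/g")  # ~890 mAh/g
```

The 597 mA h g⁻¹ retained after 50 cycles thus corresponds to roughly two-thirds of the theoretical value, a typical outcome for conversion anodes where irreversible losses accumulate on cycling.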


Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase in our lifetime. Electronic digital computers and high-performance communication networks are central to contemporary information technology. Computing applications in a wide range of areas, including business, communications, medical research, transportation, entertainment, and education, are transforming local and global societies. The rapid changes in the fields of computing and information technology also make the study of ethics exciting and challenging, as nearly every day the media report on a new invention, controversy, or court ruling. This tutorial will give a broad overview of the scientific foundations, technological advances, social implications, and ethical and legal issues related to computing. It will cover the milestones in computing and networking, the social context of computing, professional and ethical responsibilities, philosophical frameworks, and the social, ethical, historical, and political implications of computer and information technology. It will outline the impact of the tremendous growth of computer and information technology on people, ethics, and law. The political and legal implications become clear when we analyze how technology has outpaced the legal and political arenas.


The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, share-nothing architecture, and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research, with an enhanced client-server scheme, inherent scalability, and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agent, information agent, and interface agent. The problem domain and the deployment of the computation agent and the information agent are presented together with the analysis, design, and implementation of experimental systems in high-performance Internet computing and in scalable Web searching. In the computation agent study, high-performance Internet computing is achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable, and scalable solution to deal with the growth of the Web and of the information on it. Our research reveals that, with the deployment of distributed software agents in Internet computing, we have a more cost-effective approach to making better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
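The key-space partitioning behind a JAM-style brute-force prototype can be sketched as follows; a minimal single-machine illustration using threads in place of distributed agents, where the key value and the "decrypts correctly" check are hypothetical stand-ins (the dissertation's prototype was written in Java and ran across remote hosts).

```python
from concurrent.futures import ThreadPoolExecutor

SECRET_KEY = 48879  # hypothetical stand-in for the unknown key

def try_range(bounds):
    """Exhaustively search one slice of the key space, as one agent would."""
    lo, hi = bounds
    for key in range(lo, hi):
        if key == SECRET_KEY:  # stand-in for "this key decrypts correctly"
            return key
    return None

def brute_force(key_bits, workers=4):
    """Partition a key space of 2**key_bits into equal slices and farm
    them out to workers, mimicking how a JAM computation agent would
    distribute slices to remote hosts."""
    total = 2 ** key_bits
    step = total // workers
    slices = [(i * step, (i + 1) * step) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for hit in pool.map(try_range, slices):
            if hit is not None:
                return hit
    return None

print(brute_force(17))
```

Because the slices share nothing, the search scales with the number of workers until coordination overhead dominates, which is the property the JAM model exploits over the Internet's share-nothing architecture.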