907 results for Integration of Programming Techniques
Abstract:
This paper presents a method of formally specifying, refining and verifying concurrent systems which uses the object-oriented state-based specification language Object-Z together with the process algebra CSP. Object-Z provides a convenient way of modelling complex data structures needed to define the component processes of such systems, and CSP enables the concise specification of process interactions. The basis of the integration is a semantics of Object-Z classes identical to that of CSP processes. This allows classes specified in Object-Z to be used directly within the CSP part of the specification. In addition to specification, we also discuss refinement and verification in this model. The common semantic basis enables a unified method of refinement to be used, based upon CSP refinement. To enable state-based techniques to be used for the Object-Z components of a specification we develop state-based refinement relations which are sound and complete with respect to CSP refinement. In addition, a verification method for static and dynamic properties is presented. The method allows us to verify properties of the CSP system specification in terms of its component Object-Z classes by using the laws of the CSP operators together with the logic for Object-Z.
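For orientation, the CSP refinement relation on which such a unified method rests can be stated in its standard denotational form. The two relations below are the usual traces and failures refinements from the CSP literature (P the specification, Q the candidate implementation); they are background material, not definitions introduced by this paper:

```latex
P \sqsubseteq_T Q \;\Longleftrightarrow\; \mathrm{traces}(Q) \subseteq \mathrm{traces}(P)
\qquad\qquad
P \sqsubseteq_F Q \;\Longleftrightarrow\; \mathrm{failures}(Q) \subseteq \mathrm{failures}(P)
```

The state-based refinement relations developed for the Object-Z components are stated to be sound and complete with respect to refinement in this sense.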
Impact of a price-maker pumped storage hydro unit on the integration of wind energy in power systems
Abstract:
The increasing integration of larger amounts of wind energy into power systems raises important operational issues, such as the balance between power generation and demand. Pumped storage hydro (PSH) units are one possible solution to mitigate this problem, since they can store the excess energy in periods of higher generation and lower demand. However, the behaviour of a PSH unit may differ considerably from what is expected in terms of wind power integration when it operates in a liberalized electricity market under a price-maker context. In this regard, this paper models and computes the optimal PSH weekly scheduling in price-taker and price-maker scenarios, both when the PSH unit operates standalone and when it is integrated in a portfolio of other generation assets. Results show that the price-maker standalone PSH unit integrates less wind power than in the price-taker situation. Moreover, when the PSH unit is integrated in a portfolio with a base load power plant, the price elasticity of demand may completely change the operational profile of the PSH unit. (C) 2014 Elsevier Ltd. All rights reserved.
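To make the scheduling problem concrete, the sketch below implements a deliberately simple price-taker heuristic: pump in the cheapest hours, generate in the most expensive ones, subject to reservoir capacity and a round-trip efficiency. It is not the paper's market model (which optimizes weekly schedules and, in the price-maker case, accounts for the unit's own influence on prices); the plant parameters and price ordering rule are assumptions made purely for illustration.

```python
def schedule_psh(prices, p_max=100.0, e_max=800.0, eta=0.75):
    """Toy price-taker PSH schedule: pump when cheap, generate when expensive.

    prices : hourly market prices (e.g. one week, 168 values)
    p_max  : pumping/generating power per hour (MW), assumed symmetric
    e_max  : reservoir energy capacity (MWh)
    eta    : round-trip efficiency, applied on the pumping side
    """
    horizon = len(prices)
    order = sorted(range(horizon), key=lambda t: prices[t])
    pump_hours = set(order[: horizon // 3])      # cheapest third of the hours
    gen_hours = set(order[-(horizon // 3):])     # most expensive third

    storage, plan = 0.0, []
    for t in range(horizon):
        if t in pump_hours and storage + eta * p_max <= e_max:
            storage += eta * p_max
            plan.append(-p_max)                  # buying energy to pump
        elif t in gen_hours and storage >= p_max:
            storage -= p_max
            plan.append(p_max)                   # selling stored energy
        else:
            plan.append(0.0)
    return plan
```

In a price-maker setting the prices themselves depend on the unit's schedule (for instance through a residual demand curve), which is precisely why the paper finds the standalone price-maker unit integrating less wind power than its price-taker counterpart.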
Abstract:
This paper describes a communication model to integrate repositories of programming problems with other e-Learning software components. The motivation for this work comes from the EduJudge project, which aims to connect an existing repository of programming problems to learning management systems. When trying to use the existing repositories of learning objects we realized that they are mainly specialized search engines and lack features for integration with other e-Learning systems. With this model we intend to clarify the main features of a programming problem repository, in order to enable the design and development of software components that use it. The two main points of this model are the definition of programming problems as learning objects and the definition of the core functions exposed by the repository. In both cases, this model follows the existing specifications of the IMS standard and proposes extensions to deal with the special requirements of automatic evaluation and grading of programming exercises. In the definition of programming problems as learning objects we introduced a new schema for metadata. This schema is used to represent metadata related to automatic evaluation that cannot be conveniently represented using the standard: the type of automatic evaluation; the requirements of the evaluation engine; or the roles of different assets - test cases, program solutions, etc. In the definition of the core functions we used two different web service flavours - SOAP and REST - and described each function as an operation for each type of interface. We also describe the data types of the arguments of each operation. These data types consist mainly of learning objects and their identifiers, but also include usage reports and queries using XQuery.
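As a rough illustration of how an e-Learning component might consume the REST flavour of such a repository, the sketch below wraps three plausible operations: fetching a problem (learning object), posting a usage report, and searching with an XQuery expression. The base URL, endpoint paths and JSON payloads are hypothetical; the actual EduJudge interface definitions are not reproduced in the abstract.

```python
import json
import urllib.parse
import urllib.request

BASE = "http://repository.example.org/api"   # placeholder host, not the real service


def get_problem(problem_id):
    """Fetch a programming problem (learning object) and its metadata."""
    with urllib.request.urlopen(f"{BASE}/problems/{problem_id}") as resp:
        return json.load(resp)


def report_usage(problem_id, report):
    """Send a usage/grade report for a problem back to the repository."""
    req = urllib.request.Request(
        f"{BASE}/problems/{problem_id}/reports",
        data=json.dumps(report).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


def search(xquery):
    """Query the repository; the abstract mentions XQuery-based queries."""
    q = urllib.parse.quote(xquery)
    with urllib.request.urlopen(f"{BASE}/search?query={q}") as resp:
        return json.load(resp)
```

A SOAP binding would expose the same three operations through a WSDL-described interface; the point of the model is that both flavours share one set of core functions and data types.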
Abstract:
The use of transgenic mice is increasing in all fields of research, particularly in neuroscience, due to the widespread need of animal models to solve neurological and psychiatric medical conditions. Different methodologies have been tested in the last decades in order to produce such transgenic animals. The ultimate goal of this thesis is to compare different methods of random integration of a transgene in the genome of mice in terms of efficiency, stability of the transgene integration, number of animals required and the labour intensity of each technique. We compared the most used method – pronuclear microinjection (PNMI) – with two other promising techniques – Testis Mediated Gene Transfer (TMGT) by electroporation and in vivo lentiviral transfection. The three techniques were performed using a reporter gene – green fluorescent protein (GFP) – whose transcription was driven by the constitutive cytomegalovirus (CMV) promoter. These three techniques were later reproduced using the tyrosine hydroxylase (TH) promoter and the neuronal manipulator channelrhodopsin-2 fused to the enhanced yellow fluorescent reporter protein (ChR2-EYFP).
The transgenic animal we sought to produce would express the light-driven channel only in dopaminergic cells, making it possible to specifically activate this group of neurons while simultaneously observing the behaviour of a freely moving animal. This is a very important tool in basic neuroscience research, since it helps to clarify the role of specific groups of neurons, map circuits in the brain, and consequently understand neurological diseases such as Parkinson's disease or schizophrenia, where the function of certain types of neurons is affected. When comparing the three methods, it was verified that using a reporter gene PNMI resulted in 31.3% of transgenic mice obtained, testis electroporation in 0% and lentiviral injection in 0%. When using the gene of interest, the results obtained were, respectively, 18.8%, 63.9% and 0%.
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was first to develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data. Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than the other, more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results give us confidence for future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
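To convey the flavour of simulated annealing (SA)-based conditional simulation, here is a deliberately small sketch built on assumed synthetic inputs: a smooth "geophysical" porosity trend stands in for the crosshole GPR information, two fixed columns of values stand in for neutron porosity logs, and the objective function is a toy misfit rather than the geostatistical criteria used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic inputs (placeholders for GPR-derived trend and porosity logs).
nz, nx = 40, 60
trend = 0.25 + 0.05 * np.sin(np.linspace(0, 3 * np.pi, nx))[None, :] * np.ones((nz, 1))
well_cols = [10, 45]                                   # "borehole" locations
logs = trend[:, well_cols] + 0.02 * rng.standard_normal((nz, len(well_cols)))

# Initial realization: trend plus small-scale noise, with hard data imposed.
model = trend + 0.02 * rng.standard_normal((nz, nx))
model[:, well_cols] = logs


def objective(m):
    """Toy misfit: honour the coarse lateral trend and a target level of
    small-scale variability (stand-ins for the real geostatistical constraints)."""
    coarse = np.mean((m.mean(axis=0) - trend.mean(axis=0)) ** 2)
    texture = (np.std(m - trend) - 0.02) ** 2
    return coarse + texture


T = 1e-4                                               # initial "temperature"
for it in range(20000):
    i, j = rng.integers(nz), rng.integers(nx)
    if j in well_cols:
        continue                                       # never perturb the log data
    before = objective(model)
    old = model[i, j]
    model[i, j] = old + 0.01 * rng.standard_normal()
    delta = objective(model) - before
    if delta > 0 and rng.random() >= np.exp(-delta / T):
        model[i, j] = old                              # reject a worsening move
    T *= 0.9995                                        # geometric cooling schedule
```

In the thesis, the objective instead conditions the realizations on the crosshole GPR data and the neutron porosity logs themselves, and the resulting porosity fields are subsequently linked to hydraulic conductivity for the flow and transport predictions.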
Abstract:
This work is divided into three volumes: Volume I: Strain-Based Damage Detection; Volume II: Acceleration-Based Damage Detection; Volume III: Wireless Bridge Monitoring Hardware.
Volume I: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. The statistical damage-detection tools, control-chart-based damage-detection methodologies, were further investigated and advanced. For the validation of the damage-detection approaches, strain data were obtained from a sacrificial specimen with simulated damage attached to the previously utilized US 30 Bridge over the South Skunk River (in Ames, Iowa). To provide an enhanced ability to detect changes in the behavior of the structural system, various control chart rules were evaluated. False indications and true indications were studied to compare the damage-detection ability of each methodology and each control chart rule. An autonomous software program called Bridge Engineering Center Assessment Software (BECAS) was developed to control all aspects of the damage-detection processes. BECAS requires no user intervention after initial configuration and training.
Volume II: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. The objective of this part of the project was to validate and integrate a vibration-based damage-detection algorithm with the strain-based methodology formulated by the Iowa State University Bridge Engineering Center. This report volume (Volume II) presents the use of vibration-based damage-detection approaches as local methods to quantify damage at critical areas in structures. Acceleration data were collected and analyzed to evaluate the relationships between sensors and with changes in environmental conditions. A sacrificial specimen was investigated to verify the damage-detection capabilities, and this volume presents a transmissibility concept and damage-detection algorithm that show potential to sense local changes in the dynamic stiffness between points across a joint of a real structure. The validation and integration of the vibration-based and strain-based damage-detection methodologies will add significant value to Iowa's current and future bridge maintenance, planning, and management.
Volume III: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. This report volume (Volume III) summarizes the energy harvesting techniques and prototype development for a bridge monitoring system that uses wireless sensors. The wireless sensor nodes are used to collect strain measurements at critical locations on a bridge. The bridge monitoring hardware system consists of a base station and multiple self-powered wireless sensor nodes. The base station is responsible for the synchronization of data sampling on all nodes and for data aggregation.
Each wireless sensor node includes a sensing element, a processing and wireless communication module, and an energy harvesting module. The hardware prototype for a wireless bridge monitoring system was developed and tested on the US 30 Bridge over the South Skunk River in Ames, Iowa. The functions and performance of the developed system, including strain data, energy harvesting capacity, and wireless transmission quality, were studied and are covered in this volume.
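As an illustration of the control-chart-based damage detection described in Volume I, the sketch below applies the basic Shewhart X-bar rule (a subgroup mean beyond the three-sigma control limits signals a change): the limits are learned from a healthy-state baseline and then applied to incoming strain data. The rule choice, baseline length, subgroup size and synthetic strain values are assumptions for illustration, not the BECAS configuration.

```python
import numpy as np


def xbar_control_chart(strain, baseline_n=200, subgroup=10):
    """Shewhart X-bar chart: learn limits from a healthy baseline, then flag
    subgroup means falling outside the +/- 3-sigma control limits."""
    strain = np.asarray(strain, dtype=float)
    baseline = strain[:baseline_n]
    center = baseline.mean()
    sigma = baseline.std(ddof=1) / np.sqrt(subgroup)   # std dev of a subgroup mean
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    monitored = strain[baseline_n:]
    usable = len(monitored) // subgroup * subgroup
    means = monitored[:usable].reshape(-1, subgroup).mean(axis=1)
    flags = (means > ucl) | (means < lcl)
    return means, flags, (lcl, center, ucl)


# Synthetic example: a small shift in mean strain after "damage" occurs.
rng = np.random.default_rng(0)
healthy = rng.normal(100.0, 2.0, 400)     # baseline plus undamaged monitoring data
damaged = rng.normal(103.0, 2.0, 200)     # shifted response after simulated damage
means, flags, limits = xbar_control_chart(np.concatenate([healthy, damaged]))
print(f"{flags.sum()} of {flags.size} subgroups flagged out of control")
```

The other control chart rules evaluated in the project (for example, runs of points on one side of the center line) follow the same pattern: learn limits from healthy data, then count true and false indications on monitored data.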
Abstract:
Geophysical techniques can help to bridge the inherent gap, with regard to spatial resolution and range of coverage, that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation that we consider to be particularly suitable for local-scale studies characterized by high-resolution and high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, can account for a wide variety of data and constraints of differing resolution and hardness, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target parameter distributions. Compared to more conventional approaches of this kind, our approach provides significant advancements in the way that the larger-scale deterministic information resolved by the hydrogeophysical data can be accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
Abstract:
Brain activity can be measured non-invasively with functional imaging techniques. Each pixel in such an image represents a neural mass of about 10^5 to 10^7 neurons. Mean field models (MFMs) approximate their activity by averaging out neural variability while retaining salient underlying features, like neurotransmitter kinetics. However, MFMs incorporating the regional variability, realistic geometry and connectivity of cortex have so far appeared intractable. This lack of biological realism has led to a focus on gross temporal features of the EEG. We address these impediments and showcase a "proof of principle" forward prediction of co-registered EEG/fMRI for a full-size human cortex in a realistic head model with anatomical connectivity, see figure 1. MFMs usually assume homogeneous neural masses, isotropic long-range connectivity and simplistic signal expression to allow rapid computation with partial differential equations. But these approximations are insufficient, in particular for the high spatial resolution obtained with fMRI, since different cortical areas vary in their architectonic and dynamical properties, have complex connectivity, and can contribute non-trivially to the measured signal. Our code instead supports the local variation of model parameters and freely chosen connectivity for many thousand triangulation nodes spanning a cortical surface extracted from structural MRI. This allows the introduction of realistic anatomical and physiological parameters for cortical areas and their connectivity, including both intra- and inter-area connections. Proper cortical folding and conduction through a realistic head model are then added to obtain accurate signal expression for a comparison to experimental data. To showcase the synergy of these computational developments, we simultaneously predict EEG and fMRI BOLD responses by adding an established model for neurovascular coupling and convolving "Balloon-Windkessel" hemodynamics. We also incorporate regional connectivity extracted from the CoCoMac database [1]. Importantly, these extensions can be easily adapted according to future insights and data. Furthermore, while our own simulation is based on one specific MFM [2], the computational framework is general and can be applied to models favored by the user. Finally, we provide a brief outlook on improving the integration of multi-modal imaging data through iterative fits of a single underlying MFM in this realistic simulation framework.
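To give a concrete sense of the neurovascular coupling step mentioned above, the sketch below integrates a standard Balloon-Windkessel hemodynamic model with forward Euler, turning a neural drive time series into a predicted BOLD response. The parameter values are typical ones from the hemodynamic-modelling literature and are assumptions here, not the settings used in the cited simulation framework.

```python
import numpy as np

# Typical parameter values from the hemodynamic-model literature (assumed).
kappa, gamma = 0.65, 0.41            # signal decay / flow autoregulation (1/s)
tau, alpha, rho = 0.98, 0.32, 0.34   # transit time (s), stiffness, resting O2 extraction
V0 = 0.02                            # resting blood volume fraction
k1, k2, k3 = 7.0 * rho, 2.0, 2.0 * rho - 0.2


def bold_from_neural(z, dt=0.01):
    """Integrate the Balloon-Windkessel ODEs with forward Euler and return
    the predicted BOLD signal for a neural drive time series z."""
    s, f, v, q = 0.0, 1.0, 1.0, 1.0          # vasodilatory signal, flow, volume, dHb
    bold = np.empty_like(z, dtype=float)
    for t, zt in enumerate(z):
        E = 1.0 - (1.0 - rho) ** (1.0 / f)   # oxygen extraction fraction
        ds = zt - kappa * s - gamma * (f - 1.0)
        df = s
        dv = (f - v ** (1.0 / alpha)) / tau
        dq = (f * E / rho - v ** (1.0 / alpha) * q / v) / tau
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        bold[t] = V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))
    return bold


# Example: a 2 s boxcar of neural activity sampled at 100 Hz.
drive = np.zeros(3000)
drive[100:300] = 1.0
signal = bold_from_neural(drive)
```

In the framework described above, the neural drive would come from the MFM state at each triangulation node, and the same drive would also be passed through the head model to obtain the co-registered EEG prediction.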
Abstract:
A myriad of methods are available for virtual screening of small organic compound databases. In this study we have successfully applied a quantitative model of consensus measurements, using a combination of 3D similarity searches (ROCS and EON), Hologram Quantitative Structure-Activity Relationships (HQSAR) and docking (FRED, FlexX, Glide and AutoDock Vina), to retrieve cruzain inhibitors from collected databases. All methods were assessed individually and then combined in Ligand-Based Virtual Screening (LBVS) and Target-Based Virtual Screening (TBVS) consensus scoring, using Receiver Operating Characteristic (ROC) curves to evaluate their performance. Three consensus strategies were used: scaled-rank-by-number, rank-by-rank and rank-by-vote. The scaled-rank-by-number strategy proved the most successful, as its steep ROC curve indicated higher enrichment power in the early retrieval of active compounds from the database. The ligand-based methods yielded a robust and predictive HQSAR model that showed superior discrimination between active and inactive compounds and performed better than the ROCS and EON procedures. Overall, the integration of fast computational techniques based on ligand and target structures resulted in a more efficient retrieval of cruzain inhibitors with desired pharmacological profiles, which may be useful to advance the discovery of new trypanocidal agents.
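A minimal sketch of one way to combine per-method scores and evaluate retrieval with a ROC curve is shown below. Here the scaled-rank-by-number strategy is interpreted as min-max scaling of each method's scores followed by averaging, and the score and activity data are random placeholders (so the AUC will be near 0.5), not the study's datasets.

```python
import numpy as np

# Placeholder per-method scores for the same compound library; higher is better.
# In the study these would come from similarity searches (ROCS/EON) and docking.
rng = np.random.default_rng(1)
n = 200
scores = {
    "similarity": rng.random(n),
    "docking_a": rng.random(n),
    "docking_b": rng.random(n),
}
active = rng.random(n) < 0.1          # placeholder activity labels


def scaled_rank_by_number(score_table):
    """Min-max scale each method's scores to [0, 1] and average across methods."""
    scaled = []
    for s in score_table.values():
        s = np.asarray(s, dtype=float)
        scaled.append((s - s.min()) / (s.max() - s.min()))
    return np.mean(scaled, axis=0)


def roc_auc(consensus, labels):
    """Area under the ROC curve for a consensus score (higher score = ranked first)."""
    order = np.argsort(-consensus)
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(~labels) / (~labels).sum()
    return np.trapz(tpr, fpr)


print("AUC:", roc_auc(scaled_rank_by_number(scores), active))
```

Rank-by-rank would average the per-method ranks instead of the scaled scores, and rank-by-vote would count how many methods place a compound above a chosen cutoff; early enrichment can then be read off the initial portion of the ROC curve.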
Abstract:
Purpose: This study tested the hypothesis that early integration of plateau root form endosseous implants is significantly affected by surgical drilling technique. Materials and Methods: Sixty-four implants were bilaterally placed in the diaphysial radius of 8 beagles and remained in vivo for 2 and 4 weeks. Half the implants had an alumina-blasted/acid-etched surface and the other half a surface coated with calcium phosphate. Half the implants with the 2 surface types were drilled at 50 rpm without saline irrigation and the other half were drilled at 900 rpm under abundant irrigation. After euthanasia, the implants in bone were processed nondecalcified and referred for histologic analysis. Bone-to-implant contact, bone area fraction occupancy, and the distance from the tip of the plateau to pristine cortical bone were measured. Statistical analyses were performed by analysis of variance at a 95% level of significance, considering implant surface, time in vivo, and drilling speed as independent variables and bone-to-implant contact, bone area fraction occupancy, and distance from the tip of the plateau to pristine cortical bone as dependent variables. Results: The results showed that both techniques led to implant integration and intimate contact between bone and the 2 implant surfaces. A significant increase in bone-to-implant contact and bone area fraction occupancy was observed as time elapsed from 2 to 4 weeks and for the calcium phosphate-coated implant surface compared with the alumina-blasted/acid-etched surface. Conclusions: Because the surgical drilling technique did not affect the early integration of plateau root form implants, the hypothesis was refuted. (C) 2011 American Association of Oral and Maxillofacial Surgeons. J Oral Maxillofac Surg 69:2158-2163, 2011
Abstract:
The objective of this work was to evaluate extreme water table depths in a watershed, using methods for geographical spatial data analysis. The groundwater spatio-temporal dynamics was evaluated in an outcrop of the Guarani Aquifer System. Water table depths were estimated from water levels monitored in 23 piezometers and from time series modeling available from April 2004 to April 2011. For the generation of spatial scenarios, geostatistical techniques were used, which incorporated ancillary information related to the geomorphological patterns of the watershed into the prediction, using a digital elevation model. This procedure improved the estimates, due to the high correlation between water levels and elevation, and added physical meaning to the predictions. The scenarios showed differences regarding the extreme levels - too deep or too shallow - and can support water planning, efficient water use, and sustainable water management in the watershed.
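One common way to bring elevation into such predictions, in the spirit described above, is regression kriging (a close relative of kriging with external drift): fit a linear trend of water level on elevation, krige the residuals with a variogram model, and add the two parts back together. The sketch below is a schematic stand-in under assumed variogram parameters and synthetic coordinates, not the study's calibrated model.

```python
import numpy as np


def exp_variogram(h, nugget=0.1, sill=1.0, rng_=150.0):
    """Exponential variogram model (h in the same distance units as rng_)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_))


def regression_kriging(xy_obs, z_obs, elev_obs, xy_new, elev_new):
    """Linear trend on elevation plus simple kriging of the residuals."""
    # 1) elevation-driven trend: z ~ a + b * elevation
    A = np.column_stack([np.ones_like(elev_obs), elev_obs])
    coef, *_ = np.linalg.lstsq(A, z_obs, rcond=None)
    resid = z_obs - A @ coef

    # 2) simple kriging of residuals with an assumed exponential variogram
    sill = 1.0
    d_oo = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    d_no = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    C = sill - exp_variogram(d_oo)        # covariances between observations
    c = sill - exp_variogram(d_no)        # covariances to prediction points
    weights = np.linalg.solve(C, c.T)     # one column of weights per target
    kriged_resid = weights.T @ resid

    # 3) recombine trend and residual components
    trend_new = np.column_stack([np.ones_like(elev_new), elev_new]) @ coef
    return trend_new + kriged_resid


# Synthetic usage: 23 "piezometers" with made-up coordinates, depths and elevations.
rng = np.random.default_rng(1)
xy_obs = rng.uniform(0, 1000, size=(23, 2))
elev_obs = rng.uniform(550, 650, size=23)
z_obs = 0.05 * elev_obs - 20 + rng.normal(0, 0.5, size=23)
xy_new = rng.uniform(0, 1000, size=(5, 2))
elev_new = rng.uniform(550, 650, size=5)
print(regression_kriging(xy_obs, z_obs, elev_obs, xy_new, elev_new))
```

Repeating the prediction over the extreme levels estimated from the time series at each piezometer yields the spatial scenarios of too-deep and too-shallow water tables mentioned in the abstract.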
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer Linear Programs (MIPs). On the other hand, more than 50 years of intensive research has dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is still more than active in trying to answer some of these questions. As a consequence, a huge number of papers are continuously developed and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one occurs when we are asked to handle a general MIP and we cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use some general-purpose techniques. The second one occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to possibly strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, this chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, together with some preliminary computational results.
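For context on the normalization conditions analyzed in Chapter 3, the cut generating linear program (CGLP) for a split disjunction can be written in its textbook (Balas) form; this is background, not a formulation specific to the thesis. With the LP relaxation given by Ax >= b, a point x* to be cut off, and the split disjunction pi^T x <= pi_0 or pi^T x >= pi_0 + 1 (pi, pi_0 integer), a disjunctive cut alpha^T x >= beta is obtained from:

```latex
\begin{aligned}
\max_{\alpha,\,\beta,\,u,\,v,\,u_0,\,v_0} \quad & \beta - \alpha^{T} x^{*} \\
\text{s.t.} \quad & \alpha = A^{T} u - u_0 \pi, \qquad \beta \le b^{T} u - u_0 \pi_0, \\
                  & \alpha = A^{T} v + v_0 \pi, \qquad \beta \le b^{T} v + v_0 (\pi_0 + 1), \\
                  & u, v \ge 0, \quad u_0, v_0 \ge 0, \\
                  & \mathbf{1}^{T} u + \mathbf{1}^{T} v + u_0 + v_0 = 1 \quad \text{(one possible normalization)}.
\end{aligned}
```

The last constraint truncates the otherwise unbounded cone of valid multipliers; the thesis studies how alternative choices of this normalization interact with cut rank, density and strength, and with weak rays of the disjunctive cone.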
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present an overall (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut methods in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general-purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
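The destroy-and-repair loop of Chapters 5 and 6 can be summarized schematically as follows. In this sketch the repair step is a plain greedy cheapest-insertion heuristic rather than the MIP-based neighborhood exploration used in the thesis, and the instance data are placeholders.

```python
import math
import random


def route_cost(routes, dist):
    """Total length of all routes, each starting and ending at depot 0."""
    return sum(dist[a][b] for r in routes for a, b in zip([0] + r, r + [0]))


def destroy(routes, fraction=0.2, rng=random):
    """Randomly remove a fraction of the customers from the current solution."""
    customers = [c for r in routes for c in r]
    removed = set(rng.sample(customers, max(1, int(fraction * len(customers)))))
    kept = [[c for c in r if c not in removed] for r in routes]
    return [r for r in kept if r], sorted(removed)


def repair(routes, removed, dist):
    """Greedy cheapest insertion (stand-in for the MIP-based repair step)."""
    for c in removed:
        best = (math.inf, None, None)
        for ri, r in enumerate(routes):
            for pos in range(len(r) + 1):
                trial = r[:pos] + [c] + r[pos:]
                delta = route_cost([trial], dist) - route_cost([r], dist)
                if delta < best[0]:
                    best = (delta, ri, pos)
        _, ri, pos = best
        routes[ri] = routes[ri][:pos] + [c] + routes[ri][pos:]
    return routes


def destroy_and_repair(routes, dist, iters=200, seed=0):
    rng = random.Random(seed)
    best, best_cost = routes, route_cost(routes, dist)
    for _ in range(iters):
        partial, removed = destroy([r[:] for r in best], rng=rng)
        candidate = repair(partial, removed, dist)
        cost = route_cost(candidate, dist)
        if cost < best_cost:                 # keep only improving solutions
            best, best_cost = candidate, cost
    return best, best_cost


# Hypothetical instance: depot 0 and customers 1..6 with random coordinates.
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(7)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(destroy_and_repair([[1, 2, 3], [4, 5, 6]], dist))
```

In the thesis, the repair step instead solves an integer programming formulation over the destroyed solution with a general-purpose MIP solver, which is what makes the explored neighborhoods exponentially large.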
Abstract:
Molecules are the smallest possible elements for electronic devices, with active elements for such devices typically a few Angstroms in footprint area. Owing to the possibility of producing ultrahigh-density devices, tremendous effort has been invested in producing electronic junctions using various types of molecules. The major issues for molecular electronics include (1) developing an effective scheme to connect molecules with present micro- and nano-technology, (2) increasing the lifetime and stability of the devices, and (3) increasing their performance in comparison to state-of-the-art devices. In this work, we attempt to use carbon nanotubes (CNTs) as the interconnecting nanoelectrodes between molecules and microelectrodes. The ultimate goal is to use two individual CNTs to sandwich molecules in a cross-bar configuration while having these CNTs connected with microelectrodes such that the junction displays the electronic character of the molecule chosen. We have successfully developed an effective scheme to connect molecules with CNTs, which is scalable to arrays of molecular electronic devices. To realize this far-reaching goal, the following technical topics have been investigated.
1. Synthesis of multi-walled carbon nanotubes (MWCNTs) by thermal chemical vapor deposition (T-CVD) and plasma-enhanced chemical vapor deposition (PECVD) techniques (Chapter 3). We have evaluated the potential use of tubular and bamboo-like MWCNTs grown by T-CVD and PECVD in terms of their structural properties.
2. Horizontal dispersion of MWCNTs with and without surfactants, and the integration of MWCNTs onto microelectrodes using deposition by dielectrophoresis (DEP) (Chapter 4). We have systematically studied the use of surfactant molecules to disperse and horizontally align MWCNTs on substrates. In addition, DEP is shown to produce impurity-free placement of MWCNTs, forming connections between microelectrodes. We demonstrate that the deposition density is tunable by both AC field strength and AC field frequency.
3. Etching of MWCNTs for impurity-free nanoelectrodes (Chapter 5). We show that the residual Ni catalyst on MWCNTs can be removed by acid etching; the tip removal and collapsing of tubes into pyramids enhance the stability of field emission from the tube arrays. The acid-etching process can be used to functionalize the MWCNTs, which was used to make our initial CNT-nanoelectrode glucose sensors. Finally, lessons learned while trying to perform spectroscopic analysis of the functionalized MWCNTs were vital for designing our final devices.
4. Molecular junction design and electrochemical synthesis of biphenyl molecules on carbon microelectrodes for all-carbon molecular devices (Chapter 6). Utilizing the experience gained in the work done so far, our final device design is described. We demonstrate the capability of preparing patterned glassy carbon films to serve as the bottom electrode in the new geometry. However, the molecular switching behavior of biphenyl was not observed by scanning tunneling microscopy (STM), mercury drop, or fabricated glassy carbon/biphenyl/MWCNT junctions. Either the density of these molecules is not optimal for effective integration of devices using MWCNTs as the nanoelectrodes, or an electroactive contaminant was reduced instead of the ionic biphenyl species.
5. Self-assembly of octadecanethiol (ODT) molecules on gold microelectrodes for functional molecular devices (Chapter 7).
We have realized an effective scheme to produce Au/ODT/MWCNT junctions by spanning MWCNTs across ODT-functionalized microelectrodes. A percentage of the resulting junctions retain the expected character of an ODT monolayer. While the process is not yet optimized, our successful junctions show that molecular electronic devices can be fabricated using simple processes such as photolithography, self-assembled monolayers and dielectrophoresis.
Abstract:
The objective of this doctoral research is to investigate internal frost damage due to crystallization pore pressure in porous cement-based materials by developing computational and experimental characterization tools. As an essential component of the U.S. infrastructure system, the durability of concrete has a significant impact on maintenance costs. In cold climates, freeze-thaw damage is a major issue affecting the durability of concrete. The deleterious effects of the freeze-thaw cycle depend on the microscale characteristics of concrete, such as the pore sizes and the pore distribution, as well as the environmental conditions. Recent theories attribute internal frost damage of concrete to crystallization pore pressure in cold environments. The pore structure has a significant impact on the freeze-thaw durability of cement/concrete samples. Scanning electron microscopy (SEM) and transmission X-ray microscopy (TXM) techniques were applied to characterize freeze-thaw damage within the pore structure. In the microscale pore system, the crystallization pressures at sub-cooling temperatures were calculated using an interface energy balance with thermodynamic analysis. Multi-phase Extended Finite Element Modeling (XFEM) and bilinear Cohesive Zone Modeling (CZM) were developed to simulate the internal frost damage of heterogeneous cement-based material samples. The fracture simulations with these two techniques were validated by comparing the predicted fracture behavior with the damage captured in compact tension (CT) and single-edge notched beam (SEB) bending tests. The study applied the developed computational tools to simulate the internal frost damage caused by ice crystallization with two-dimensional (2-D) SEM and three-dimensional (3-D) reconstructed SEM and TXM digital samples. The pore pressure calculated from the thermodynamic analysis was used as input for the model simulations. The 2-D and 3-D bilinear CZM predicted crack initiation and propagation within the cement paste microstructure. The favorably predicted crack paths in concrete/cement samples indicate that the developed bilinear CZM techniques have the ability to capture crack nucleation and propagation in cement-based material samples with multiple phases and associated interfaces. Comparing the computational predictions with the actual damaged samples also indicates that ice crystallization pressure is the main mechanism of internal frost damage in cementitious materials.
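For reference, the bilinear cohesive zone model mentioned above is defined by a traction-separation law that rises linearly to the cohesive strength and then softens linearly to zero at the final opening. The sketch below evaluates that law with assumed parameter values, not the calibrated properties from the thesis.

```python
def bilinear_traction(delta, sigma_max=3.0e6, delta_0=1.0e-6, delta_f=2.0e-5):
    """Bilinear traction-separation law (pure mode I).

    delta     : crack opening displacement (m)
    sigma_max : cohesive strength (Pa), reached at delta_0
    delta_0   : opening at damage initiation (m)
    delta_f   : opening at complete failure (m)
    The dissipated fracture energy is G_c = 0.5 * sigma_max * delta_f.
    """
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:
        return sigma_max * delta / delta_0                           # elastic branch
    if delta <= delta_f:
        return sigma_max * (delta_f - delta) / (delta_f - delta_0)   # linear softening
    return 0.0                                                       # traction-free


# Fracture energy implied by the assumed parameters (J/m^2).
G_c = 0.5 * 3.0e6 * 2.0e-5
print(G_c)   # 30 J/m^2, a plausible order of magnitude for cement paste
```

In the simulations described above, the crystallization pore pressure from the thermodynamic analysis loads the microstructure, and cohesive elements governed by a law of this form open along the predicted crack paths.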