941 results for Biofertilizer and optimization
Abstract:
The flavivirus West Nile virus (WNV) has spread rapidly throughout the world in recent years, causing fever, meningitis, encephalitis, and fatalities. Because the viral protease NS2B/NS3 is essential for replication, it is attracting attention as a potential therapeutic target, although there are currently no antiviral inhibitors for any flavivirus. This paper focuses on elucidating interactions between a hexapeptide substrate (Ae-KPGLKR-p-nitroanilide) and residues at S1 and S2 in the active site of WNV protease by comparing the catalytic activities of selected mutant recombinant proteases in vitro. Homology modeling enabled the prediction of key mutations in WNV NS3 protease at S1 (V115A/F, D129A/E/N, S135A, Y150A/F, S160A, and S163A) and S2 (N152A) that might influence substrate recognition and catalytic efficiency. Key conclusions are that the substrate P1 Arg interacts strongly with S1 residues Asp-129, Tyr-150, and Ser-163 and, to a lesser extent, Ser-160, and that P2 Lys makes an essential interaction with Asn-152 at S2. The inferred substrate-enzyme interactions provide a basis for rational protease inhibitor design and optimization. The high sequence conservation within flavivirus proteases means that this study may also be relevant to the design of inhibitors for other flavivirus proteases.
Abstract:
A method has been constructed for the solution of a wide range of chemical plant simulation models, including differential equations and optimization. Double orthogonal collocation on finite elements is applied to convert the model into an NLP problem that is solved either by the VF13AD package, based on successive quadratic programming, or by the GRG2 package, based on the generalized reduced gradient method. This approach is termed the simultaneous optimization and solution strategy. The objective functional can contain integral terms. The state and control variables can have time delays. Equalities and inequalities containing state and control variables can be included in the model, as well as algebraic equations and inequalities. The maximum number of independent variables is 2; problems containing 3 independent variables can be transformed into problems with 2 independent variables using finite differencing. The maximum number of NLP variables and constraints is 1500. The method is also suitable for solving ordinary and partial differential equations. The state functions are approximated by a linear combination of Lagrange interpolation polynomials. The control function can be approximated either by a linear combination of Lagrange interpolation polynomials or by a piecewise constant function over the finite elements. The number of internal collocation points can vary between finite elements. The residual error is evaluated at arbitrarily chosen equidistant grid points, enabling the user to check the accuracy of the solution between the collocation points, where the solution is exact. The solution functions can be tabulated. There is an option to use control vector parameterization to solve optimization problems containing initial value ordinary differential equations; this approach should be used when there are many differential equations or when the upper integration limit is to be selected optimally. The portability of the package has been addressed by converting it from VAX FORTRAN 77 to IBM PC FORTRAN 77 and to SUN SPARC 2000 FORTRAN 77. Computer runs have shown that the method can reproduce optimization results published in the literature. The GRG2 and VF13AD packages, integrated into the optimization package, proved to be robust and reliable. The package contains an executive module, a module performing control vector parameterization, and two nonlinear problem solver modules, GRG2 and VF13AD. There is also a stand-alone module that converts the differential-algebraic optimization problem into a nonlinear programming problem.
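The control vector parameterization option mentioned above can be illustrated with a short, self-contained sketch. In the example below (an assumption-laden illustration, not the FORTRAN package itself), the control is a piecewise constant function over finite elements, an assumed initial-value ODE dx/dt = -x + u is integrated numerically, and the resulting NLP is solved with a general-purpose SQP routine from SciPy; the model, objective, and bounds are all illustrative.

```python
# Minimal sketch of control vector parameterization (illustrative problem only).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

N = 10                                   # number of finite elements (piecewise-constant control)
t_grid = np.linspace(0.0, 1.0, N + 1)

def simulate(u_pieces):
    """Integrate dx/dt = -x + u with x(0) = 1 for a piecewise-constant control."""
    def rhs(t, x):
        k = min(np.searchsorted(t_grid, t, side="right") - 1, N - 1)
        return -x[0] + u_pieces[k]
    return solve_ivp(rhs, (0.0, 1.0), [1.0], dense_output=True, max_step=0.05)

def objective(u_pieces):
    """Track x(t) = 0.5 while penalizing control effort (an integral-type objective)."""
    sol = simulate(u_pieces)
    ts = np.linspace(0.0, 1.0, 201)
    xs = sol.sol(ts)[0]
    tracking = np.trapz((xs - 0.5) ** 2, ts)
    effort = 0.01 * np.sum(u_pieces ** 2) / N   # exact integral of the piecewise-constant u^2
    return tracking + effort

res = minimize(objective, x0=np.zeros(N), bounds=[(-2.0, 2.0)] * N, method="SLSQP")
print("optimal piecewise-constant control:", np.round(res.x, 3))
```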
Abstract:
We consider data losses in a single node of a packet-switched Internet-like network. We employ two distinct models, one with discrete and the other with continuous one-dimensional random walks, representing the state of a queue in a router. Both models have a built-in critical behavior with a sharp transition from exponentially small to finite losses. It turns out that the finite capacity of a buffer and the packet-dropping procedure give rise to specific boundary conditions which lead to strong loss rate fluctuations at the critical point even in the absence of such fluctuations in the data arrival process.
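As a rough illustration of the discrete model, the sketch below simulates a queue whose length performs a biased one-dimensional random walk in a finite buffer, with packets dropped on arrival at a full buffer; the buffer capacity, probabilities, and run length are assumed values chosen only to show the sharp change in losses near the critical point where arrival and service rates balance.

```python
# Minimal sketch: discrete random-walk queue with a finite buffer and packet dropping.
import numpy as np

rng = np.random.default_rng(0)

def loss_fraction(p_arrival, p_service, capacity=100, steps=200_000):
    q, arrived, dropped = 0, 0, 0
    for _ in range(steps):
        if rng.random() < p_arrival:            # a packet arrives
            arrived += 1
            if q < capacity:
                q += 1
            else:
                dropped += 1                    # full buffer: the packet is lost
        if q > 0 and rng.random() < p_service:  # a packet is served
            q -= 1
    return dropped / max(arrived, 1)

for p in (0.45, 0.50, 0.55):                    # below, at, and above criticality
    print(f"arrival prob {p:.2f}: loss fraction {loss_fraction(p, 0.50):.4f}")
```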
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry for providing ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty in resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete, and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process; this knowledge base acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
Abstract:
Accurately assessing the extent of myocardial tissue injury induced by myocardial infarction (MI) is critical to the planning and optimization of MI patient management. With this in mind, this study investigated the feasibility of using combined fluorescence and diffuse reflectance spectroscopy to characterize a myocardial infarct at the different stages of its development. An animal study was conducted using twenty male Sprague-Dawley rats with MI. In vivo fluorescence spectra at 337 nm excitation and diffuse reflectance between 400 nm and 900 nm were measured from the heart using a portable fiber-optic spectroscopic system. Spectral acquisition was performed on (1) the normal heart region; (2) the region immediately surrounding the infarct; and (3) the infarcted region—one, two, three and four weeks into MI development. The spectral data were divided into six subgroups according to the histopathological features associated with various degrees/severities of myocardial tissue injury as well as various stages of myocardial tissue remodeling, post infarction. Various data processing and analysis techniques were employed to recognize the representative spectral features corresponding to various histopathological features associated with myocardial infarction. The identified spectral features were utilized in discriminant analysis to further evaluate their effectiveness in classifying tissue injuries induced by MI. In this study, it was observed that MI induced significant alterations (p < 0.05) in the diffuse reflectance spectra, especially between 450 nm and 600 nm, from myocardial tissue within the infarcted and surrounding regions. In addition, MI induced a significant elevation in fluorescence intensities at 400 and 460 nm from the myocardial tissue from the same regions. The extent of these spectral alterations was related to the duration of the infarction. Using the spectral features identified, an effective tissue injury classification algorithm was developed which produced a satisfactory overall classification result (87.8%). The findings of this research support the concept that optical spectroscopy represents a useful tool to non-invasively determine the in vivo pathophysiological features of a myocardial infarct and its surrounding tissue, thereby providing valuable real-time feedback to surgeons during various surgical interventions for MI.
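The discriminant-analysis step described above can be sketched as follows. The example uses synthetic placeholder features (mean diffuse reflectance between 450 and 600 nm and fluorescence intensities near 400 and 460 nm, motivated by the spectral regions reported above) and a standard linear discriminant analysis classifier; the data, group labels, and feature definitions are illustrative assumptions, not the study's measurements.

```python
# Minimal sketch: discriminant analysis on placeholder spectral features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_group, groups = 40, ["normal", "border", "infarct"]

# Placeholder feature matrix: [reflectance_450_600, fluorescence_400, fluorescence_460]
X = np.vstack([rng.normal(loc=[1.0 + 0.3 * g, 0.5 + 0.2 * g, 0.4 + 0.25 * g],
                          scale=0.15, size=(n_per_group, 3))
               for g in range(len(groups))])
y = np.repeat(groups, n_per_group)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)      # overall classification accuracy
print(f"cross-validated accuracy: {scores.mean():.1%}")
```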
Abstract:
Knowledge-based radiation treatment is an emerging concept in radiotherapy. It mainly refers to techniques that guide or automate treatment planning in the clinic by learning from prior knowledge. Different models have been developed to realize it, one of which was proposed by Yuan et al. at Duke for lung IMRT planning. This model can automatically determine both the beam configuration and the optimization objectives for non-coplanar beams based on patient-specific anatomical information. Although plans generated automatically by this model demonstrate dosimetric quality equivalent to or better than clinically approved plans, its validity and generality are limited by the empirical assignment of a coefficient, called the angle spread constraint, defined in the beam efficiency index used for beam ranking. To eliminate these limitations, a systematic study of this coefficient is needed to acquire evidence for its optimal value.
To achieve this purpose, eleven lung cancer patients with complex tumor shapes, whose clinically approved plans used non-coplanar beams, were studied retrospectively within the framework of the automatic lung IMRT treatment planning algorithm. The primary and boost plans used in three patients were treated as different cases because of their different target sizes and shapes; a total of 14 lung cases were therefore re-planned using the knowledge-based automatic lung IMRT planning algorithm, varying the angle spread constraint from 0 to 1 in increments of 0.2. A modified beam angle efficiency index was adopted to guide the beam selection, and great effort was made to keep the quality of the plans associated with every angle spread constraint as high as possible. Important dosimetric parameters for the PTV and OARs, quantitatively reflecting plan quality, were extracted from the DVHs and analyzed as a function of the angle spread constraint for each case. These parameters were compared between clinical plans and model-based plans using two-sample Student's t-tests, and a regression analysis was performed on a composite index, built from the percentage errors between the dosimetric parameters of the model-based plans and those of the clinical plans, as a function of the angle spread constraint.
The results show that the model-based plans generally have quality equivalent to or better than the clinically approved plans, both qualitatively and quantitatively. All dosimetric parameters in the automatically generated plans, except those for the lungs, are statistically better than or comparable to those in the clinical plans. On average, reductions of more than 15% are observed in the conformity index and homogeneity index for the PTV and in V40 and V60 for the heart, while increases of 8% and 3% are observed in V5 and V20 for the lungs, respectively. The intra-plan comparison among the model-based plans demonstrates that plan quality does not change much for angle spread constraints larger than 0.4. Further examination of the composite index as a function of the angle spread constraint shows that 0.6 is the optimal value, resulting in the statistically best achievable plans.
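The statistical comparison described above can be sketched in a few lines. The example below uses placeholder dosimetric values: a two-sample t-test compares one parameter between clinical and model-based plans, and a composite index (here, the mean absolute percentage error over several parameters) is tabulated against the angle spread constraint; the numbers are synthetic and merely mimic the reported trend.

```python
# Minimal sketch: t-test comparison and composite-index tabulation (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_cases = 14

# Placeholder PTV conformity indices for 14 cases in clinical vs model-based plans.
ci_clinical = rng.normal(0.80, 0.05, n_cases)
ci_model = rng.normal(0.85, 0.05, n_cases)
t, p = stats.ttest_ind(ci_model, ci_clinical)
print(f"two-sample t-test on conformity index: t = {t:.2f}, p = {p:.3f}")

# Composite index: mean absolute percentage error of several dosimetric
# parameters (model vs clinical), tabulated per angle spread constraint value.
for asc in np.arange(0.0, 1.2, 0.2):
    pct_errors = rng.normal(10.0 * abs(asc - 0.6), 2.0, size=5)   # placeholder trend
    composite = np.mean(np.abs(pct_errors))
    print(f"angle spread constraint {asc:.1f}: composite index {composite:.1f}%")
```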
Abstract:
The coupling of mechanical stress fields in polymers to covalent chemistry (polymer mechanochemistry) has provided access to previously unattainable chemical reactions and polymer transformations. In the bulk, mechanochemical activation has been used as the basis for new classes of stress-responsive polymers that demonstrate stress/strain sensing, shear-induced intermolecular reactivity for molecular level remodeling and self-strengthening, and the release of acids and other small molecules that are potentially capable of triggering further chemical response. The potential utility of polymer mechanochemistry in functional materials is limited, however, by the fact that to date, all reported covalent activation in the bulk occurs in concert with plastic yield and deformation, so that the structure of the activated object is vastly different from its nascent form. Mechanochemically activated materials have thus been limited to “single use” demonstrations, rather than as multi-functional materials for structural and/or device applications. Here, we report that filled polydimethylsiloxane (PDMS) elastomers provide a robust elastic substrate into which mechanophores can be embedded and activated under conditions from which the sample regains its original shape and properties. Fabrication is straightforward and easily accessible, providing access for the first time to objects and devices that either release or reversibly activate chemical functionality over hundreds of loading cycles.
While the mechanically accelerated ring-opening reaction of spiropyran to merocyanine and its associated color change provide a useful method by which to image the molecular-scale stress/strain distribution within a polymer, the magnitude of the forces necessary for activation had yet to be quantified. Here, we report single molecule force spectroscopy studies of two spiropyran isomers. Ring opening on the timescale of tens of milliseconds is found to require forces of ~240 pN, well below those of previously characterized covalent mechanophores. The lower threshold force is a combination of a low force-free activation energy and the fact that the change in rate with force (the activation length) of each isomer is greater than that inferred in other systems. Importantly, quantifying the magnitude of the forces required to activate individual spiropyran-based force probes enables the probe to behave as a “scout” of molecular forces in materials, whose observed behavior can be extrapolated to predict the reactivity of potential mechanophores within a given material and deformation.
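The force dependence of the ring-opening rate referred to above is commonly described by the Bell model, k(F) = k0·exp(F·Δx / kBT), where Δx is the activation length. The sketch below uses this relation with placeholder values of k0 and Δx (not the measured parameters) to estimate the force at which ring opening occurs on a tens-of-milliseconds timescale.

```python
# Minimal sketch of Bell-model force-dependent kinetics (placeholder parameters).
import numpy as np

kB_T = 4.114e-21          # thermal energy at ~298 K, in joules (~4.1 pN*nm)

def rate(force_pN, k0_per_s, dx_nm):
    """Bell-model rate constant at a given applied force."""
    return k0_per_s * np.exp(force_pN * 1e-12 * dx_nm * 1e-9 / kB_T)

def force_for_timescale(tau_s, k0_per_s, dx_nm):
    """Force (pN) at which the mean reaction time 1/k equals tau_s."""
    return kB_T * np.log(1.0 / (tau_s * k0_per_s)) / (dx_nm * 1e-9) * 1e12

# Placeholder force-free rate constant (s^-1) and activation length (nm).
k0, dx = 1e-4, 0.3
print(f"force for ~30 ms ring opening: {force_for_timescale(0.03, k0, dx):.0f} pN")
```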
We subsequently translated the design platform to existing dynamic soft technologies to fabricate the first mechanochemically responsive devices: first, by remotely inducing dielectric patterning of an elastic substrate to produce assorted fluorescent patterns in concert with topological changes; and second, by adopting a soft robotic platform to produce a color change from the strains inherent to pneumatically actuated robotic motion. As shown herein, covalent polymer mechanochemistry provides a viable mechanism to convert the same mechanical potential energy used for actuation into value-added, constructive covalent chemical responses. The color change associated with actuation suggests opportunities for not only new color-changing or camouflaging strategies, but also the possibility of simultaneous activation of latent chemistry (e.g., release of small molecules, change in mechanical properties, activation of catalysts, etc.) in soft robots. In addition, mechanochromic stress mapping in a functional actuating device might provide a useful design and optimization tool, revealing spatial and temporal force evolution within the actuator in a way that might also be coupled to feedback loops that allow autonomous self-regulation of activity.
In the future, both the specific material and the general approach should be useful in enriching the responsive functionality of soft elastomeric materials and devices. We anticipate the development of new mechanophores that, like the materials, are reversibly and repeatedly activated, expanding the capabilities of soft, active devices and further permitting dynamic control over chemical reactivity that is otherwise inaccessible, each in response to a single remote signal.
Abstract:
This article introduces the Evaluation Framework EFI for the impact measurement of learning, education and training. The Evaluation Framework for Impact Measurement was developed to specify the evaluation phase and its objectives and tasks within the IDEAL Reference Model for the introduction and optimization of quality development in learning, education and training. First, a description of the Evaluation Framework for Impact Measurement is provided, followed by a brief overview of the IDEAL Reference Model. Finally, an example of the implementation of the Evaluation Framework for Impact Measurement within the ARISTOTELE project is presented.
Abstract:
Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. Because wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian Process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian Process for time-series forecasting is introduced in this study to address these issues. The new method is shown to reduce computational complexity and increase prediction accuracy. It is further proved that the forecasting result converges as the number of available data points approaches infinity. In addition, a teaching-learning-based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that of a single wind farm, demonstrating the effectiveness of the proposed method.
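A minimal sketch of Gaussian Process regression applied to wind power forecasting is shown below. It frames the series autoregressively (predict the next value from a window of past values) and fits hyperparameters with scikit-learn's built-in gradient-based optimizer rather than the TLBO procedure used in the study; the synthetic series, kernel, and forecast horizon are illustrative assumptions.

```python
# Minimal sketch: GP regression for autoregressive wind power forecasting (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
t = np.arange(200, dtype=float)
power = 50 + 20 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 3, t.size)  # synthetic series

# Autoregressive formulation: predict the next value from a window of lags.
lags = 24
X = np.array([power[i:i + lags] for i in range(len(power) - lags - 1)])
y = power[lags + 1:]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(), normalize_y=True)
gp.fit(X[:-24], y[:-24])                      # hold out the last day for testing

pred, std = gp.predict(X[-24:], return_std=True)
print("mean absolute error on held-out day:", np.mean(np.abs(pred - y[-24:])).round(2))
```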
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitations of sensor nodes, the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research being done in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimize the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implemented the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools were also included to help developers design, implement, optimize, and test the WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation. The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability level achieved by developing applications at a high level of abstraction, and the estimated overhead due to usage of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, in which a sensor network is used to monitor the temperature within an area.
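The idea of describing a WSN application independently of the underlying operating system and hardware can be illustrated with the short sketch below. It is not the thesis framework's API; all class and method names are hypothetical. The application logic depends only on an abstract node interface, and a platform-specific binding (here a simulator; a TinyOS or Contiki binding would be analogous) supplies the hardware details.

```python
# Minimal sketch of a platform-independent WSN task with a platform-specific binding.
from abc import ABC, abstractmethod
import random

class SensorNode(ABC):
    """Platform-independent view of a node: what the application may use."""
    @abstractmethod
    def read_temperature(self) -> float: ...
    @abstractmethod
    def send(self, payload: dict) -> None: ...

class TempSenseTask:
    """Platform-independent application logic (in the spirit of the TempSense monitor)."""
    def __init__(self, node: SensorNode, threshold: float):
        self.node, self.threshold = node, threshold

    def run_once(self) -> None:
        reading = self.node.read_temperature()
        if reading > self.threshold:            # only report anomalies, saving power
            self.node.send({"temp": reading})

class SimulatedNode(SensorNode):
    """One platform-specific binding; an OS-specific binding would replace this class."""
    def read_temperature(self) -> float:
        return 20.0 + random.random() * 15.0
    def send(self, payload: dict) -> None:
        print("radio ->", payload)

TempSenseTask(SimulatedNode(), threshold=30.0).run_once()
```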
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The synthesis and optimization of two Li-ion solid electrolytes were studied in this work. Different combinations of precursors were used to prepare La0.5Li0.5TiO3 via mechanosynthesis. Despite the ability to form a perovskite phase by the mechanochemical reaction, it was not possible to obtain a pure La0.5Li0.5TiO3 phase by this process. Of the seven combinations of precursors and conditions tested, the one in which La2O3, Li2CO3 and TiO2 were milled for 480 min (LaOLiCO-480) showed the best results, with trace impurity phases still being observed. The main impurity phase was La2O3 after mechanosynthesis (22.84%) and Li2TiO3 after calcination (4.20%). Two different sol-gel methods were used to substitute boron on the Zr site of Li1+xZr2-xBx(PO4)3 or the P site of Li1+6xZr2(P1-xBxO4)3, with the doping being achieved on the Zr site using a method adapted from Alamo et al. (1989). The results show that the Zr site, and not the P site, is the preferential mechanism for B doping of LiZr2(PO4)3. Rietveld refinement of the unit-cell parameters was performed, and consideration of Vegard's law verified that phase purity can be obtained up to x = 0.05. This agrees with the phases present in the XRD data, which showed the additional presence of the low-temperature (monoclinic) phase for compositions with x ≥ 0.075 in powders sintered at 1200 °C for 12 h. The compositions inside the solid solution undergo the phase transition from triclinic (PDF#01-074-2562) to rhombohedral (PDF#01-070-6734) on heating from 25 to 100 °C, as reported in the literature for the base composition. Despite several efforts, it was not possible to obtain dense pellets with physical integrity after sintering, so further work is required to obtain dense pellets for the electrochemical characterisation of LiZr2(PO4)3 and Li1.05Zr1.95B0.05(PO4)3.
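The Vegard's-law check mentioned above amounts to verifying that the refined unit-cell parameter varies linearly with composition inside the solid solution and departs from that line beyond the solubility limit. The sketch below performs this check on placeholder lattice parameters (illustrative numbers, not the refined values from this work).

```python
# Minimal sketch of a Vegard's-law solid-solution check (placeholder cell parameters).
import numpy as np

x = np.array([0.00, 0.025, 0.05, 0.075, 0.10])     # nominal B content
a = np.array([8.855, 8.850, 8.845, 8.844, 8.843])  # placeholder cell parameter / angstrom

# Fit Vegard's law to the compositions believed to lie inside the solid solution.
inside = x <= 0.05
slope, intercept = np.polyfit(x[inside], a[inside], 1)
predicted = slope * x + intercept

for xi, ai, pi in zip(x, a, predicted):
    flag = "within solid solution" if abs(ai - pi) < 5e-4 else "deviates (limit exceeded)"
    print(f"x = {xi:.3f}: a_obs = {ai:.3f} A, a_Vegard = {pi:.3f} A -> {flag}")
```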
Abstract:
An extended formulation of a polyhedron P is a linear description of a polyhedron Q together with a linear map π such that π(Q)=P. These objects are of fundamental importance in polyhedral combinatorics and optimization theory, and the subject of a number of studies. Yannakakis' factorization theorem (Yannakakis in J Comput Syst Sci 43(3):441–466, 1991) provides a surprising connection between extended formulations and communication complexity, showing that the smallest size of an extended formulation of P equals the nonnegative rank of its slack matrix S. Moreover, Yannakakis also shows that the nonnegative rank of S is at most 2^c, where c is the complexity of any deterministic protocol computing S. In this paper, we show that the latter result can be strengthened when we allow protocols to be randomized. In particular, we prove that the base-2 logarithm of the nonnegative rank of any nonnegative matrix equals the minimum complexity of a randomized communication protocol computing the matrix in expectation. Using Yannakakis' factorization theorem, this implies that the base-2 logarithm of the smallest size of an extended formulation of a polytope P equals the minimum complexity of a randomized communication protocol computing the slack matrix of P in expectation. We show that allowing randomization in the protocol can be crucial for obtaining small extended formulations. Specifically, we prove that for the spanning tree and perfect matching polytopes, small variance in the protocol forces large size in the extended formulation.
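The objects in Yannakakis' theorem can be made concrete with a small numerical sketch. The example below builds the slack matrix S[i, j] = b_i - A_i·v_j of the unit square and computes a nonnegative factorization with scikit-learn's NMF; an exact factorization with inner dimension r certifies that the nonnegative rank is at most r, and the base-2 logarithm of that rank is the quantity tied to randomized communication complexity. The choice of polytope and factorization rank here is purely illustrative.

```python
# Minimal sketch: slack matrix of the unit square and a numerical nonnegative factorization.
import numpy as np
from sklearn.decomposition import NMF

# Unit square [0, 1]^2: facets A x <= b and vertices v_j.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([1, 0, 1, 0], dtype=float)
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

S = b[:, None] - A @ V.T          # 4 x 4 slack matrix, entrywise nonnegative
print("slack matrix:\n", S)

# Attempt a nonnegative factorization S ~ W H with an illustrative inner dimension of 4.
model = NMF(n_components=4, init="nndsvda", max_iter=2000, tol=1e-10)
W = model.fit_transform(S)
H = model.components_
print("reconstruction error:", np.linalg.norm(S - W @ H))
```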