992 results for Synthesis Models
Abstract:
This research project is concerned with the design, synthesis, and development of new phosphodiesterase 5 (PDE5) inhibitors with improved selectivity and lower toxicity. Two series of fused heterocyclic compounds, built on a 5-membered and a 6-membered ring respectively, were designed and synthesized. By varying the starting materials and fragments, two virtual libraries, each consisting of close to a hundred compounds, were obtained successfully. Screening of sexual stimulation activity in rabbits demonstrated that both groups of compounds significantly stimulated rabbit penile erection. Subsequent toxicity studies revealed that the 2-(substituted-sulfonylphenyl)-imidazo[1,5-a]-1,3,5-triazine-4-(3H)-one group possessed unacceptable toxicity, with an oral LD50 of about 200 mg/kg, whereas the 2-(substituted-sulfonylphenyl)-pyrrolo[2,3-d]pyrimidin-4-one group showed acceptable toxicity, with an oral LD50 over 2000 mg/kg. Further bioactivity studies showed that yonkenafil, the representative of the 2-(substituted-sulfonylphenyl)-pyrrolo[2,3-d]pyrimidin-4-one group, has better selectivity between PDE5 and PDE6 than sildenafil and a better overall profile of sexual stimulation in animals. Chronic toxicity studies further confirmed that yonkenafil did not cause any serious side effects or damage in animal models, and most of its actions were explainable. Based on the evidence from these studies, yonkenafil was recommended for clinical trials by the regulatory authority of China, the SFDA. Yonkenafil has now completed Phase I clinical trials and is ready to progress into Phase II. It is hoped that yonkenafil will provide an alternative for ED patients in the future.
Abstract:
This paper presents a novel prosody model in the context of computer text-to-speech synthesis applications for tone languages. We have demonstrated its applicability using the Standard Yorùbá (SY) language. Our approach is motivated by the theory that abstract and realised forms of various prosody dimensions should be modelled within a modular and unified framework [Coleman, J.S., 1994. Polysyllabic words in the YorkTalk synthesis system. In: Keating, P.A. (Ed.), Phonological Structure and Forms: Papers in Laboratory Phonology III, Cambridge University Press, Cambridge, pp. 293–324]. We have implemented this framework using the Relational Tree (R-Tree) technique. R-Tree is a sophisticated data structure for representing a multi-dimensional waveform in the form of a tree. The underlying assumption of this research is that it is possible to develop a practical prosody model by using appropriate computational tools and techniques which combine acoustic data with an encoding of the phonological and phonetic knowledge provided by experts. To implement the intonation dimension, fuzzy logic based rules were developed using speech data from native speakers of Yorùbá. The Fuzzy Decision Tree (FDT) and the Classification and Regression Tree (CART) techniques were tested in modelling the duration dimension. For practical reasons, we have selected the FDT for implementing the duration dimension of our prosody model. To establish the effectiveness of our prosody model, we have also developed a Stem-ML prosody model for SY. We have performed both quantitative and qualitative evaluations on our implemented prosody models. The results suggest that, although the R-Tree model does not predict the numerical speech prosody data as accurately as the Stem-ML model, it produces synthetic speech prosody with better intelligibility and naturalness. The R-Tree model is particularly suitable for speech prosody modelling for languages with limited language resources and expertise, e.g. African languages. Furthermore, the R-Tree model is easy to implement, interpret and analyse.
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular holistic framework. This framework is implemented using the Relational Tree (R-Tree) technique. An important feature of our R-Tree framework is its flexibility: it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we have also developed a Classification And Regression Tree (CART) based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations on the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to extrapolate from the training data, since it achieved better accuracy on the test data set. Our qualitative evaluation results show that our FDT model produces synthesised speech that is perceived to be more natural than our CART model. In addition, we also observed that the expressiveness of FDT is much better than that of CART. That is because the representation in FDT is not restricted to a set of piecewise or discrete constant approximations. We therefore conclude that the FDT approach is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
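To make the quantitative evaluation concrete, the sketch below fits a CART-style regression tree to synthetic syllable-duration data and reports the RMSE and correlation measures mentioned above. This is a minimal illustration only, assuming scikit-learn is available and using invented features and data; it is not the authors' SY corpus or implementation, and the fuzzy decision tree counterpart is not shown.

    # Minimal sketch: CART-style duration model evaluated with RMSE and correlation.
    # scikit-learn assumed; features and durations are synthetic, for illustration only.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    # Hypothetical syllable features: tone class, position in word, number of phones
    X = rng.integers(0, 3, size=(500, 3)).astype(float)
    # Synthetic syllable durations in milliseconds, loosely dependent on the features
    y = 120 + 25 * X[:, 0] + 10 * X[:, 2] + rng.normal(0, 15, size=500)

    X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

    cart = DecisionTreeRegressor(max_depth=5).fit(X_train, y_train)
    pred = cart.predict(X_test)

    rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))   # Root Mean Square Error
    corr = float(np.corrcoef(pred, y_test)[0, 1])          # correlation with held-out durations
    print(f"CART duration model: RMSE = {rmse:.1f} ms, Corr = {corr:.2f}")

A piecewise-constant tree of this kind is exactly the representation the abstract contrasts with the less restricted outputs of the fuzzy decision tree.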
Abstract:
The nonlinear inverse synthesis (NIS) method, in which information is encoded directly onto the continuous part of the nonlinear signal spectrum, has recently been proposed as a promising digital signal processing technique for combating fiber nonlinearity impairments. However, because the NIS method is based on the integrability property of the lossless nonlinear Schrödinger equation, the original approach can only be applied directly to optical links with ideal distributed Raman amplification. In this paper, we propose and assess a modified scheme of the NIS method that can be used effectively in standard optical links with lumped amplifiers, such as erbium-doped fiber amplifiers (EDFAs). The proposed scheme takes into account the average effect of the fiber loss to obtain an integrable model (the lossless path-averaged model) to which the NIS technique is applicable. We found that the error between the lossless path-averaged and lossy models increases linearly with transmission distance and input power (measured in dB). We numerically demonstrate the feasibility of the proposed NIS scheme in a burst-mode orthogonal frequency division multiplexing (OFDM) transmission scheme with advanced modulation formats (e.g., QPSK, 16QAM, and 64QAM), showing a performance improvement of up to 3.5 dB; these results are comparable to those achievable with multi-step-per-span digital backpropagation.
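For orientation, a minimal sketch of the path-averaging step described above, with notation assumed here rather than taken from the paper: after factoring the exponential power decay out of the field, the span-varying nonlinearity of the lossy link is replaced by its average over an amplifier span of length $L_a$, giving the integrable lossless path-averaged equation

    \frac{\partial q}{\partial z} = -\,i\,\frac{\beta_2}{2}\,\frac{\partial^2 q}{\partial t^2} + i\,\gamma_1 |q|^2 q,
    \qquad
    \gamma_1 = \gamma\,\frac{1 - e^{-\alpha L_a}}{\alpha L_a},

where $\alpha$ is the fiber loss coefficient, $\beta_2$ the group-velocity dispersion, and $\gamma$ the Kerr nonlinearity. The NIS transform is applied to this integrable model; its residual mismatch with the true lossy channel is the error reported above to grow with transmission distance and launch power.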
Abstract:
Objective: Loss of skeletal muscle is the most debilitating feature of cancer cachexia, and there are few treatments available. The aim of this study was to compare the anticatabolic efficacy of L-leucine and the leucine metabolite β-hydroxy-β-methylbutyrate (Ca-HMB) on muscle protein metabolism, both in vitro and in vivo. Methods: Studies were conducted in mice bearing the cachexia-inducing murine adenocarcinoma 16 tumor, and in murine C2C12 myotubes exposed to proteolysis-inducing factor, lipopolysaccharide, and angiotensin II. Results: Both leucine and HMB were found to attenuate the increase in protein degradation and the decrease in protein synthesis induced in murine myotubes by proteolysis-inducing factor, lipopolysaccharide, and angiotensin II. However, HMB was more potent than leucine, because HMB at 50 μM produced essentially the same effect as leucine at 1 mM. Both leucine and HMB reduced the activity of the ubiquitin-proteasome pathway, as measured by the functional (chymotrypsin-like) enzyme activity of the proteasome in muscle lysates, as well as by Western blot quantitation of protein levels of the structural/enzymatic proteasome subunits (20S and 19S) and the ubiquitin ligases (MuRF1 and MAFbx). In vivo studies in mice bearing the murine adenocarcinoma 16 tumor showed a low dose of Ca-HMB (0.25 g/kg) to be 60% more effective than leucine (1 g/kg) in attenuating loss of body weight over a 4-day period. Conclusion: These results favor the clinical feasibility of using Ca-HMB over high doses of leucine for the treatment of cancer cachexia. © 2014 Elsevier Inc.
Abstract:
Since the stagflation periods that followed the oil crises of the 1970s, fears of adverse macroeconomic effects have intensified with practically every major oil price increase, even though experience shows that importers are less and less affected by movements in the real price of oil. Blanchard and Galí [2007] attributed these weakening effects to economies operating more efficiently and flexibly, while according to Kilian [2010] the post-2000 price increase was fuelled by a favourable global economic environment, which offset the negative processes caused by the higher price. By extending Kilian's [2009] model with a time-varying-parameter econometric procedure, this study examines the compatibility of the two approaches. The results suggest a complementary relationship between the hypotheses: for the macroeconomic consequences it is not the price itself but its underlying causes that matter, while the impact of these underlying factors has changed continuously over recent decades. _____ Many economists argue that the stagflation periods of the 1970s were related to the two main oil crises. However, experience shows that these effects were eliminated over the decades; e.g. oil-importing economies enjoyed solid growth and low inflation when oil prices surged in the 2000s. Blanchard and Galí (2007) found that economies became more effective and flexible in handling high energy prices, while Kilian (2010) identified the structural differences behind the price changes as the main reason for the weakening macroeconomic effects of oil-price shocks. The article sets out to test the compatibility of the two rival theories, using time-varying parameter models. The results show that both hypotheses can be correct concurrently: the structure of the change in price matters, but the impulse responses varied over time.
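For readers unfamiliar with the term, a time-varying parameter model in its generic state-space form (a generic illustration only, not the article's exact specification) lets the regression coefficients drift as a random walk, so the estimated responses to oil-market shocks can themselves evolve over the decades:

    y_t = x_t' \beta_t + \varepsilon_t, \qquad \beta_t = \beta_{t-1} + \eta_t,
    \qquad \varepsilon_t \sim N(0, \sigma^2), \quad \eta_t \sim N(0, Q),

where $y_t$ would be a macroeconomic outcome, $x_t$ the structural oil-market shocks identified in a Kilian-type model, and $\beta_t$ the drifting coefficients from which the time-varying impulse responses are computed.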
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One new development paradigm that places models as abstractions at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component in MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness, targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a HLL, e.g., Java or C++, then execute that resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services being provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created for the user-centric communication domain using the aforementioned approach. This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smartgrid or microgrid energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
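To make the proposed decoupling concrete, the sketch below shows one way a generic model of execution could delegate to swappable domain-specific knowledge extensions. The class and method names are hypothetical illustrations written for this summary; they are not the CVM's or the microgrid DSVM's actual interfaces.

    # Minimal sketch of a generic model of execution (GMoE) with swappable
    # domain-specific knowledge (DSK) extensions. All names are hypothetical.
    from abc import ABC, abstractmethod

    class DomainKnowledge(ABC):
        """DSK extension point: domain-specific translation of a model change."""
        @abstractmethod
        def script_for(self, change: dict) -> str: ...

    class CommunicationDSK(DomainKnowledge):
        def script_for(self, change: dict) -> str:
            return f"openSession({change['participants']})"   # illustrative output only

    class MicrogridDSK(DomainKnowledge):
        def script_for(self, change: dict) -> str:
            return f"dispatchLoad({change['kw']})"            # illustrative output only

    class SynthesisEngine:
        """Generic MoE: walks model changes and asks the DSK for executable scripts."""
        def __init__(self, dsk: DomainKnowledge):
            self.dsk = dsk

        def synthesize(self, model_changes: list) -> list:
            return [self.dsk.script_for(c) for c in model_changes]

    # The same engine (MoE) is reused across domains by swapping the DSK extension.
    cvm_engine = SynthesisEngine(CommunicationDSK())
    grid_engine = SynthesisEngine(MicrogridDSK())
    print(cvm_engine.synthesize([{"participants": ["alice", "bob"]}]))
    print(grid_engine.synthesize([{"kw": 42}]))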
Abstract:
The Last Interglacial (LIG, 129-116 thousand years BP (ka)) represents a test bed for climate model feedbacks in warmer-than-present high-latitude regions. However, mainly because aligning different palaeoclimatic archives from different parts of the world is not trivial, a spatio-temporal picture of LIG temperature changes is difficult to obtain. Here, we have selected 47 polar ice core and sub-polar marine sediment records and developed a strategy to align them onto the recent AICC2012 ice core chronology. We provide the first compilation of high-latitude temperature changes across the LIG associated with a coherent temporal framework built between ice core and marine sediment records. Our new data synthesis highlights non-synchronous maximum temperature changes between the two hemispheres, with the Southern Ocean and Antarctic records showing an early warming compared to North Atlantic records. We also observe that warmer-than-present-day conditions occur for a longer time period in southern high latitudes than in northern high latitudes. Finally, the amplitude of temperature change at high northern latitudes is larger than that recorded at high southern latitudes at the onset and the demise of the LIG. We have also compiled four data-based time slices with temperature anomalies (relative to present-day conditions) at 115 ka, 120 ka, 125 ka and 130 ka and quantitatively estimated temperature uncertainties that include relative dating errors. This provides an improved benchmark for performing more robust model-data comparisons. The surface temperature simulated by two General Circulation Models (CCSM3 and HadCM3) for 130 ka and 125 ka is compared to the corresponding time-slice data synthesis. This comparison shows that the models predict warmer-than-present conditions earlier than documented in the North Atlantic, while neither model is able to produce the reconstructed early Southern Ocean and Antarctic warming. Our results highlight the importance of producing a sequence of time slices rather than a single time slice averaging the LIG climate conditions.
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interactions and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the size of the information. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification of the Soufrière Hills volcano on the island of Montserrat. Chapter 3 discusses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the problem of interpolating methylation levels. Chapters 4 and 5 are both about robust inference for the models. Chapter 4 provides a new robustness criterion for parameter estimation, and several inference procedures are shown to satisfy it. Chapter 5 develops a new prior that satisfies some further criteria and is therefore proposed for use in practice.
Abstract:
The recently proposed global monsoon hypothesis interprets monsoon systems as part of one global-scale atmospheric overturning circulation, implying a connection between the regional monsoon systems and an in-phase behaviour of all northern hemispheric monsoons on annual timescales (Trenberth et al., 2000). Whether this concept can be applied to past climates and variability on longer timescales is still under debate, because the monsoon systems exhibit different regional characteristics such as different seasonality (i.e. onset, peak, and withdrawal). To investigate the interconnection of different monsoon systems during the pre-industrial Holocene, five transient global climate model simulations have been analysed with respect to the rainfall trend and variability in different sub-domains of the Afro-Asian monsoon region. Our analysis suggests that on millennial timescales with varying orbital forcing, the monsoons do not behave as a tightly connected global system. According to the models, the Indian and North African monsoons are coupled, showing similar rainfall trends and moderate correlation in rainfall variability in all models. The East Asian monsoon changes independently during the Holocene. The dissimilarities in the seasonality of the monsoon sub-systems lead to a stronger response of the North African and Indian monsoon systems to the Holocene insolation forcing than of the East Asian monsoon, and affect the seasonal distribution of Holocene rainfall variations. Within the Indian and North African monsoon domain, precipitation changes solely during the summer months, showing a decreasing Holocene precipitation trend. In the East Asian monsoon region, the precipitation signal is determined by an increasing precipitation trend during spring and a decreasing precipitation change during summer, partly balancing each other. A synthesis of reconstructions and the model results does not reveal an impact of the different seasonality on the timing of the Holocene rainfall optimum in the different sub-monsoon systems. They rather indicate locally inhomogeneous rainfall changes and show that single palaeo-records should not be used to characterise the rainfall change and monsoon evolution for entire monsoon sub-systems.
Abstract:
Using original data on 1,5000 mandibles, but mainly previously published data, I present an overview of the distribution characteristics of the mandibular torus and a hypothesis concerning its cause. Pedigree studies have established that genetic factors influence torus development. Extrinsic factors are strongly implicated by other evidence: prevalence among Arctic peoples, effects of dietary change, age regression, preponderance in males and on the right side, effects of cranial deformation, concurrence with palatine torus and maxillary alveolar exostoses, and clinical evidence. I propose that the primary factor is masticatory stress. According to a mechanism suggested by orthodontic research, the horizontal component of bite force tips the lower canine, premolars, and first molar so that their root apices exert pressure on the periodontal membrane, causing formation of new bone on the lingual cortical plate of the alveolar process. Thus formed, the hyperostosis is vulnerable to trauma, and its periosteal covering becomes bruised, causing additional deposition of bone. Genes influence the torus indirectly through their effect on occlusion. A pattern of increased expressivity with incidence suggests that a quasi-continuous model may provide a better fit to pedigree data than the single-locus models previously tested.
Abstract:
Field-programmable gate arrays are ideal hosts to custom accelerators for signal, image, and data processing but demand manual register transfer level design if high performance and low cost are desired. High-level synthesis reduces this design burden but requires manual design of complex on-chip and off-chip memory architectures, a major limitation in applications such as video processing. This paper presents an approach to resolve this shortcoming. A constructive process is described that can derive such accelerators, including on- and off-chip memory storage, from a C description such that a user-defined throughput constraint is met. By employing a novel statement-oriented approach, dataflow intermediate models are derived and used to support simple approaches for on-/off-chip buffer partitioning, derivation of custom on-chip memory hierarchies and architecture transformation to ensure user-defined throughput constraints are met with minimum cost. When applied to accelerators for full search motion estimation, matrix multiplication, Sobel edge detection, and fast Fourier transform, it is shown how real-time performance up to an order of magnitude in advance of existing commercial HLS tools is enabled whilst including all requisite memory infrastructure. Further, optimizations are presented that reduce the on-chip buffer capacity and physical resource cost by up to 96% and 75%, respectively, whilst maintaining real-time performance.
Abstract:
Symbolic execution is a powerful program analysis technique, but it is very challenging to apply to programs built using event-driven frameworks, such as Android. The main reason is that the framework code itself is too complex to symbolically execute. The standard solution is to manually create a framework model that is simpler and more amenable to symbolic execution. However, developing and maintaining such a model by hand is difficult and error-prone. We claim that we can leverage program synthesis to introduce a high degree of automation to the process of framework modeling. To support this thesis, we present three pieces of work. First, we introduced SymDroid, a symbolic executor for Android. While Android apps are written in Java, they are compiled to the Dalvik bytecode format. Instead of analyzing an app's Java source, which may not be available, or decompiling from Dalvik back to Java, which requires significant engineering effort and introduces yet another source of potential bugs in an analysis, SymDroid works directly on Dalvik bytecode. Second, we introduced Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket takes as input the framework API and tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes an executable framework model by instantiating design patterns, such that the behavior of the synthesized model on the tutorial programs matches that of the original framework. Lastly, in order to scale program synthesis to framework models, we devised adaptive concretization, a novel program synthesis algorithm that combines the best of the two major synthesis strategies: symbolic search, i.e., using SAT or SMT solvers, and explicit search, e.g., stochastic enumeration of possible solutions. Adaptive concretization parallelizes multiple sub-synthesis problems by partially concretizing highly influential unknowns in the original synthesis problem. Thanks to adaptive concretization, Pasket can generate a large-scale model, e.g., thousands of lines of code. In addition, we have used an Android model synthesized by Pasket and found that the model is sufficient to allow SymDroid to execute a range of apps.
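The sketch below illustrates the adaptive concretization idea in a deliberately simplified form: the most influential unknowns are fixed to random concrete values, and each partially concretized sub-problem is then searched in parallel, standing in for the SAT/SMT-backed symbolic search. All classes and helpers here are hypothetical stand-ins, not Pasket's or any solver's actual API.

    # Simplified sketch of adaptive concretization; names are illustrative stand-ins.
    import itertools
    import random
    from dataclasses import dataclass
    from concurrent.futures import ThreadPoolExecutor

    @dataclass
    class Unknown:
        name: str
        domain: list
        influence: float   # estimated impact of this unknown on the search space

    def symbolic_solve(check, unknowns, partial):
        """Stand-in for a SAT/SMT-backed search over the unknowns left symbolic."""
        free = [u for u in unknowns if u.name not in partial]
        for values in itertools.product(*(u.domain for u in free)):
            candidate = {**partial, **{u.name: v for u, v in zip(free, values)}}
            if check(candidate):
                return candidate
        return None

    def adaptive_concretization(check, unknowns, top_k=1, tries=32, workers=4):
        # Concretize the most influential unknowns at random, then solve each
        # partially concretized sub-problem in parallel.
        influential = sorted(unknowns, key=lambda u: u.influence, reverse=True)[:top_k]
        partials = [{u.name: random.choice(u.domain) for u in influential}
                    for _ in range(tries)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for result in pool.map(lambda p: symbolic_solve(check, unknowns, p), partials):
                if result is not None:
                    return result
        return None

    # Toy usage: synthesize x and y with x * y == 12 by concretizing the influential x.
    unknowns = [Unknown("x", list(range(1, 13)), influence=0.9),
                Unknown("y", list(range(1, 13)), influence=0.1)]
    print(adaptive_concretization(lambda a: a["x"] * a["y"] == 12, unknowns))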
Abstract:
Transdisciplinarity gained importance in the 1970s, with the first signs of weakness of both multi- and interdisciplinary approaches. This weakness was felt because of the increased complexity of the social and technological landscapes. Discussion of transdisciplinarity is generally centred in the social and health sciences; the major challenge in this research is therefore to adapt design research to the emerging transdisciplinary discussion. Based on a comparative and critical review of several engineering and design models of the design process, we advocate the importance of collaboration and conceptualisation for these disciplines. A transdisciplinary and conceptual cooperation between the engineering and industrial design disciplines is therefore considered decisive for creating breakthroughs. Furthermore, a synthesis is proposed in order to foster cooperation between engineering and industrial design.