929 results for Based structure model
Abstract:
Electrical energy storage is an important issue today. Since electricity cannot easily be stored directly, it is stored in other forms and converted back to electricity when needed. Storage technologies can therefore be classified by the form in which energy is stored; the focus here is on electrochemical energy storage systems, better known as electrochemical batteries. By far the most widespread batteries are lead-acid ones, in the two main types known as flooded and valve-regulated. Batteries appear in many important applications, such as renewable energy systems and motor vehicles, so reliable battery models are needed in order to simulate these complex electrical systems. Models developed by chemists do exist, but they are too complex and are not expressed in terms of electrical networks; this makes them inconvenient for practical use by electrical engineers, who need to interface battery models with models of other electrical systems, usually described by means of electrical circuits. Many modelling techniques are available in the literature. Starting from the Thevenin-based electrical model, the model can be adapted to the lead-acid battery type by adding a parasitic-reaction branch and a parallel network. The third-order formulation of this model is chosen, being a trustworthy general-purpose model characterized by a good ratio between accuracy and complexity. Considering the equivalent circuit network, all the equations describing the battery model are discussed and then implemented one by one in Matlab/Simulink. The model is finally validated and then used to simulate the battery behaviour in different typical conditions.
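To make the equivalent-circuit idea concrete, the following is a minimal sketch, assuming a first-order Thevenin network with a constant EMF: one series resistance plus a single parallel RC branch, integrated with forward Euler. It is not the third-order Matlab/Simulink model described above (no parasitic-reaction branch, no state-of-charge dependence), and the parameter names and values (e0, r0, r1, c1) are illustrative assumptions.

```python
# Minimal sketch (not the thesis model): a first-order Thevenin equivalent
# circuit discharged at constant current, integrated with forward Euler.
# Constant EMF e0, series resistance r0, one parallel R1-C1 branch.
import numpy as np

def simulate_thevenin(i_load, dt=1.0, e0=12.6, r0=0.05, r1=0.02, c1=2000.0):
    """Return terminal voltage over time for a given current profile (A)."""
    v_rc = 0.0                          # voltage across the parallel R1-C1 branch
    v_term = np.empty(len(i_load))
    for k, i in enumerate(i_load):
        dv = (i - v_rc / r1) / c1       # RC branch dynamics: C1*dV/dt = I - V/R1
        v_rc += dv * dt
        v_term[k] = e0 - i * r0 - v_rc  # EMF minus ohmic and RC voltage drops
    return v_term

current = np.full(3600, 5.0)            # 5 A discharge for one hour, 1 s steps
voltage = simulate_thevenin(current)
print(f"start {voltage[0]:.2f} V, end {voltage[-1]:.2f} V")
```

A higher-order version would add further RC branches, a charge-dependent EMF and the parasitic-reaction branch mentioned in the abstract.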
Abstract:
The phenomenon of portfolio entrepreneurship has attracted considerable scholarly attention and is particularly relevant in the family firm context. However, there is a lack of knowledge of the process through which portfolio entrepreneurship develops in family firms. We address this gap by analyzing four in-depth, longitudinal family firm case studies from Europe and Latin America. Using a resource-based perspective, we identify six distinct resource categories that are relevant to the portfolio entrepreneurship process. Furthermore, we reveal that their importance varies across time. Our resulting resource-based process model of portfolio entrepreneurship in family firms makes valuable contributions to both theory and practice.
Abstract:
An appreciation of the physical mechanisms that cause observed seismicity complexity is fundamental to understanding the temporal behaviour of faults and of single slip events. Numerical simulation of fault slip can provide insight into fault processes by allowing exploration of the parameter spaces that influence the microscopic and macroscopic physics of those processes, and may thus help answer these questions. Particle-based models such as the Lattice Solid Model have previously been used to simulate the stick-slip dynamics of faults, although mainly in two dimensions. Recent increases in computing power and the ability to exploit parallel computer systems have made it possible to extend particle-based fault simulations to three dimensions. In this paper a particle-based numerical model of a rough planar fault embedded between two elastic blocks in three dimensions is presented. A very simple friction law with no rate dependence and no spatial heterogeneity in the intrinsic coefficient of friction is used. To simulate earthquake dynamics the model is sheared parallel to the fault plane with a constant velocity at the driving edges. Spontaneous slip occurs on the fault when the shear stress is large enough to overcome the frictional forces. Slip events with a wide range of sizes are observed. Investigation of the temporal evolution and spatial distribution of slip during each event shows a high degree of variability between events. In some of the larger events highly complex slip patterns are observed.
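As a rough intuition for stick-slip behaviour only, and nothing like the three-dimensional particle-based model itself, the sketch below drives a single spring-loaded block with a rate-independent static/dynamic Coulomb friction law at constant velocity and records the resulting slip events; all parameter values are invented for illustration.

```python
# Toy spring-slider with Coulomb friction (static mu_s > dynamic mu_d),
# loaded at a constant driving velocity. Not the Lattice Solid Model.
import numpy as np

k, m = 1.0, 1.0                  # spring stiffness, block mass
v_drive = 0.01                   # constant loading velocity
mu_s, mu_d, normal_force = 0.6, 0.4, 1.0
dt = 0.01

x, v, load_point = 0.0, 0.0, 0.0
slipping, slip_start_x = False, 0.0
events = []                      # (time, slip distance) of each slip event

for step in range(200_000):
    load_point += v_drive * dt
    spring_force = k * (load_point - x)
    if not slipping and abs(spring_force) > mu_s * normal_force:
        slipping, slip_start_x = True, x          # static friction overcome
    if slipping:
        friction = mu_d * normal_force * np.sign(v if v != 0 else spring_force)
        v += (spring_force - friction) / m * dt
        x += v * dt
        if v <= 0:                                # block decelerates and re-sticks
            events.append((step * dt, x - slip_start_x))
            v, slipping = 0.0, False

print(f"{len(events)} slip events, mean slip {np.mean([s for _, s in events]):.3f}")
```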
Abstract:
Motivation: While processing of MHC class II antigens for presentation to helper T-cells is essential for normal immune response, it is also implicated in the pathogenesis of autoimmune disorders and hypersensitivity reactions. Sequence-based computational techniques for predicting HLA-DQ binding peptides have encountered limited success, with few prediction techniques developed using three-dimensional models. Methods: We describe a structure-based prediction model for modeling peptide-DQ3.2 beta complexes. We have developed a rapid and accurate protocol for docking candidate peptides into the DQ3.2 beta receptor and a scoring function to discriminate binders from the background. The scoring function was rigorously trained, tested and validated using experimentally verified DQ3.2 beta binding and non-binding peptides obtained from biochemical and functional studies. Results: Our model predicts DQ3.2 beta binding peptides with high accuracy [area under the receiver operating characteristic (ROC) curve A(ROC) > 0.90], compared with experimental data. We investigated the binding patterns of DQ3.2 beta peptides and illustrate that several registers exist within a candidate binding peptide. Further analysis reveals that peptides with multiple registers occur predominantly for high-affinity binders.
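The abstract summarizes discrimination between binders and non-binders as an area under the ROC curve above 0.90. As a small illustration of that metric only (the docking protocol and scoring function themselves are not reproduced), the sketch below computes a rank-based A(ROC) from hypothetical scores and binder labels.

```python
# Rank-based ROC AUC: the probability that a randomly chosen binder outscores
# a randomly chosen non-binder (ties count as half). Scores and labels below
# are made-up placeholders, not data from the paper.
import numpy as np

def roc_auc(scores, labels):
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

scores = [8.1, 7.4, 6.9, 6.5, 5.2, 4.8, 4.1, 3.0]   # hypothetical docking scores
labels = [1,   1,   1,   0,   1,   0,   0,   0]     # 1 = experimentally verified binder
print(f"A(ROC) = {roc_auc(scores, labels):.2f}")
```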
Abstract:
The ERS-1 satellite carries a scatterometer which measures the amount of radiation scattered back toward the satellite by the ocean's surface. These measurements can be used to infer wind vectors. The implementation of a neural network-based forward model which maps wind vectors to radar backscatter is addressed. Input noise cannot be neglected, and to account for it a Bayesian framework is adopted. However, Markov chain Monte Carlo sampling is too computationally expensive, so gradient information is used with a non-linear optimisation algorithm to find the maximum a posteriori values of the unknown variables. The resulting models are shown to compare well with the current operational model when visualised in the target space.
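The following is a small sketch of the inference idea described above: given a differentiable forward model (here a tiny random network standing in for a trained one) and Gaussian prior and noise assumptions, the maximum a posteriori input is found with a gradient-based optimiser rather than MCMC. The dimensions, priors and the choice of scipy's L-BFGS-B are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Tiny random network standing in for a trained forward model
# (2-D "wind vector" in, 3-D "backscatter" out); the weights are NOT trained.
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def neg_log_posterior(x, y, x_prior, sigma2=1.0, tau2=4.0):
    """Gaussian likelihood around the forward model plus a Gaussian prior on x."""
    f, h = forward(x)
    residual = y - f
    value = 0.5 * residual @ residual / sigma2 + 0.5 * (x - x_prior) @ (x - x_prior) / tau2
    J = W2 @ np.diag(1.0 - h**2) @ W1            # Jacobian of the forward model at x
    grad = -J.T @ residual / sigma2 + (x - x_prior) / tau2
    return value, grad

x_true = np.array([1.0, -0.5])                                # "true" latent input
y_obs = forward(x_true)[0] + rng.normal(scale=0.1, size=3)    # noisy observation

res = minimize(neg_log_posterior, x0=np.zeros(2), args=(y_obs, np.zeros(2)),
               jac=True, method="L-BFGS-B")
print("MAP estimate:", res.x, " true input:", x_true)
```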
Abstract:
This paper presents a simple profitability-based decision model to show how synergistic gains generated by the joint adoption of complementary innovations may influence a firm's adoption decision. For this purpose a weighted index of intra-firm diffusion is built to investigate empirically the drivers of the intensity of joint use of a set of complementary innovations. The findings indicate that establishment size, ownership structure and product market concentration are important determinants of the intensity of use. Interestingly, the factors that affect the extent of use of technological innovations also affect the extent of use of clusters of management practices. However, they can explain only part of the heterogeneity in the benefits from joint use.
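As a purely illustrative sketch of what a weighted index of intra-firm diffusion might look like (the paper's actual construction is not reproduced here), the snippet below scores an establishment by the profitability-weighted share of complementary innovations it uses; the weights and the adoption vector are invented.

```python
# Hypothetical weighted diffusion index, scaled to [0, 1].
def diffusion_index(adopted, weights):
    """Weighted intensity of joint use of complementary innovations."""
    return sum(w for a, w in zip(adopted, weights) if a) / sum(weights)

weights = [0.4, 0.3, 0.2, 0.1]        # assumed profitability weights per innovation
adopted = [True, True, False, True]   # which complementary innovations are in use
print(f"intra-firm diffusion index: {diffusion_index(adopted, weights):.2f}")
```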
Abstract:
Many planning and control tools, especially network analysis, have been developed over the last four decades. Most of them were created in military organizations to solve the problem of planning and controlling research and development projects. The original version of the network model (i.e. C.P.M./PERT) was transplanted to the construction industry without consideration of the special nature and environment of construction projects. It suited the purpose of setting targets and defining objectives, but it failed to satisfy the requirements of detailed planning and control at the site level. Several analytical and heuristic rule-based methods were designed and combined with the structure of C.P.M. to eliminate its deficiencies, but none of them provides a complete solution to the problem of resource, time and cost control. VERT was designed to deal with new ventures and is suitable for project evaluation at the development stage. CYCLONE, on the other hand, is concerned with the design and micro-analysis of the production process. This work presents an extensive critical review of the available planning techniques and addresses the problem of planning for site operation and control. Based on an outline of the nature of site control, this research developed a simulation-based network model which combines parts of the logic of both VERT and CYCLONE. Several new node types were designed to model the availability and flow of resources and the overhead and operating costs, together with special nodes for evaluating time and cost. A large software package was written to handle the input, the simulation process and the output of the model; it is designed to run on any microcomputer under the MS-DOS operating system. Data from real-life projects were used to demonstrate the capability of the technique. Finally, conclusions are drawn regarding the features and limitations of the proposed model, and recommendations for future work are outlined at the end of the thesis.
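To illustrate the general idea of simulation over an activity network (a drastic simplification of the VERT/CYCLONE-style model described above, with none of its resource, overhead-cost or special evaluation nodes), the following sketch samples triangular activity durations through a small precedence network and reports the resulting project duration and cost; the network and all numbers are invented.

```python
# Monte Carlo sampling over a tiny activity-on-node network.
import random

# activity -> (predecessors, (min, mode, max) duration in days, cost per day)
NETWORK = {
    "excavate":   ([],                    (3, 5, 9),    800.0),
    "foundation": (["excavate"],          (6, 8, 14),  1200.0),
    "frame":      (["foundation"],        (10, 12, 20), 1500.0),
    "services":   (["foundation"],        (5, 7, 12),   900.0),
    "finish":     (["frame", "services"], (8, 10, 15), 1000.0),
}

def simulate_once():
    finish_time, total_cost = {}, 0.0
    # insertion order above already respects precedence
    for act, (preds, (dmin, dmode, dmax), rate) in NETWORK.items():
        start = max((finish_time[p] for p in preds), default=0.0)
        duration = random.triangular(dmin, dmax, dmode)
        finish_time[act] = start + duration
        total_cost += duration * rate
    return max(finish_time.values()), total_cost

runs = [simulate_once() for _ in range(5000)]
durations = sorted(d for d, _ in runs)
print(f"median duration {durations[len(durations)//2]:.1f} days, "
      f"mean cost {sum(c for _, c in runs)/len(runs):,.0f}")
```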
Abstract:
In this paper a Markov chain-based analytical model is proposed to evaluate the slotted CSMA/CA algorithm specified in the MAC layer of the IEEE 802.15.4 standard. The analytical model consists of two two-dimensional Markov chains, used to model the state transitions of an 802.15.4 device during a transmission and between two consecutive frame transmissions, respectively. By introducing the two Markov chains, only a small number of Markov states is required and the scalability of the analytical model is improved. The analytical model is used to investigate the impact of the CSMA/CA parameters, the number of contending devices and the data frame size on network performance in terms of throughput and energy efficiency. It is shown by simulations that the proposed analytical model can accurately predict the performance of the slotted CSMA/CA algorithm for uplink, downlink and bi-directional traffic, in both acknowledgement and non-acknowledgement modes.
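For intuition only, here is a heavily simplified slot-level contention toy in the spirit of the system being modelled. It omits almost everything that defines 802.15.4 slotted CSMA/CA (the two clear-channel assessments, the backoff-exponent rules, the superframe structure, acknowledgements), so it is a stand-in illustration rather than the analytical model or its validation simulator; every parameter value is an assumption.

```python
# Toy slotted contention among saturated devices: each counts down a random
# backoff and transmits when the channel is idle; simultaneous transmissions
# collide and are counted as wasted airtime.
import random

def simulate(num_devices=10, frame_slots=5, max_be=5, total_slots=200_000):
    backoff = [random.randrange(2**3) for _ in range(num_devices)]
    busy_until, success = 0, 0
    for slot in range(total_slots):
        if slot < busy_until:          # channel occupied by an ongoing frame
            continue
        ready = [i for i, b in enumerate(backoff) if b == 0]
        if len(ready) == 1:
            success += 1               # exactly one transmitter: good frame
        if ready:
            busy_until = slot + frame_slots
            for i in ready:            # transmitters draw a fresh backoff
                backoff[i] = random.randrange(2**max_be)
        backoff = [backoff[i] if i in ready else max(b - 1, 0)
                   for i, b in enumerate(backoff)]
    return success * frame_slots / total_slots   # fraction of slots carrying good frames

print(f"estimated throughput: {simulate():.2f}")
```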
Abstract:
Developmental neurotoxicity is a major issue in human health and may have lasting neurological implications. In this preliminary study we exposed differentiating Ntera2/clone D1 (NT2/D1) cell neurospheres to known human teratogens classed as non-embryotoxic (acrylamide), weakly embryotoxic (lithium, valproic acid) and strongly embryotoxic (hydroxyurea), as listed by the European Centre for the Validation of Alternative Methods (ECVAM), and examined endpoints of cell viability and expression of neuronal protein markers specific to the central nervous system, in order to identify developmental neurotoxins. Following induction of neuronal differentiation, valproic acid had the most significant effect on neurogenesis, in terms of reduced viability and decreased neuronal markers. Lithium had the least effect on viability and did not significantly alter the expression of neuronal markers. Hydroxyurea significantly reduced cell viability but did not affect neuronal protein marker expression. Acrylamide reduced neurosphere viability but did not affect neuronal protein marker expression. Overall, this NT2/D1-based neurosphere model of neurogenesis may provide the basis for a model of developmental neurotoxicity in vitro.
Abstract:
The small intestine poses a major barrier to the efficient absorption of orally administered therapeutics. Intestinal epithelial cells are an extremely important site of extrahepatic clearance, primarily due to prominent P-glycoprotein-mediated active efflux and the presence of cytochrome P450s. We describe a physiologically based pharmacokinetic model which incorporates geometric variations, pH alterations and descriptions of the abundance and distribution of cytochrome 3A and P-glycoprotein along the length of the small intestine. Simulations using preclinical in vitro data for model drugs were performed to establish the influence of P-glycoprotein efflux, cytochrome 3A metabolism and passive permeability on the drug available for absorption within the enterocytes. The fraction of drug escaping the enterocyte (F(G)) was simulated for 10 cytochrome 3A substrates with a range of intrinsic metabolic clearances. Following incorporation of P-glycoprotein in vitro efflux ratios, all predicted F(G) values were within 20% of the observed in vivo F(G). The presence of P-glycoprotein increased the level of cytochrome 3A drug metabolism by up to 12-fold in the distal intestine. F(G) was highly sensitive to changes in intrinsic metabolic clearance but less sensitive to changes in intestinal drug permeability. The model will be valuable for quantifying aspects of intestinal drug absorption and distribution.
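The interplay the abstract describes (permeation competing with CYP3A metabolism, with P-gp efflux increasing exposure to the enzyme) can be caricatured very crudely as follows. This is not the published physiologically based model; the per-segment clearance competition, the way efflux is folded in, and every number are illustrative assumptions.

```python
# Crude per-segment competition between onward permeation and CYP3A metabolism;
# the efflux ratio is assumed to amplify the drug's exposure to the enzyme.
def fraction_escaping_gut(cl_perm, cl_int_met, efflux_ratio, cyp3a_profile):
    """Multiply the surviving fraction across intestinal segments."""
    f_g = 1.0
    for rel_cyp in cyp3a_profile:                     # relative CYP3A abundance per segment
        cl_met = cl_int_met * rel_cyp * efflux_ratio  # efflux amplifies metabolic exposure
        f_g *= cl_perm / (cl_perm + cl_met)           # fraction escaping this segment
    return f_g

# Hypothetical inputs: proximal-to-distal relative CYP3A abundance, a moderate
# intrinsic metabolic clearance, and a 3-fold in vitro efflux ratio.
cyp3a_profile = [1.0, 0.8, 0.4]
print(f"F_G without P-gp: {fraction_escaping_gut(10.0, 5.0, 1.0, cyp3a_profile):.2f}")
print(f"F_G with P-gp:    {fraction_escaping_gut(10.0, 5.0, 3.0, cyp3a_profile):.2f}")
```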
Abstract:
To guarantee QoS for multicast transmission, admission control for multicast sessions is required. The probe-based multicast admission control (PBMAC) scheme is a scalable and simple approach. However, PBMAC suffers from the subsequent request problem, which can significantly reduce the maximum number of multicast sessions that a network can admit. In this letter, we describe the subsequent request problem and propose an enhanced PBMAC scheme to solve it. The enhanced scheme makes use of complementary probing and remarking, which require only minor modification to the original scheme. Using a fluid-based analytical model, we prove that the enhanced scheme can always admit a higher number of multicast sessions. Furthermore, we validate the analytical model using packet-based simulation. Copyright © 2005 The Institute of Electronics, Information and Communication Engineers.
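At its core, probe-based admission control admits a session only if probe packets get through well enough; the toy check below conveys just that decision rule and nothing of the complementary probing and remarking of the enhanced scheme. The probe counts and the loss threshold are invented.

```python
# Minimal probe-loss admission check (illustrative threshold, not from the paper).
def admit_session(probes_sent, probes_received, loss_threshold=0.02):
    loss = 1.0 - probes_received / probes_sent
    return loss <= loss_threshold

print(admit_session(1000, 990))   # 1% probe loss -> True, session admitted
print(admit_session(1000, 950))   # 5% probe loss -> False, session rejected
```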
Abstract:
There have been many approaches to building expert knowledge in the medical and engineering fields, through expert systems, case-based reasoning, model-based reasoning and large-scale knowledge-based systems. The intriguing factors in these approaches are mainly the choices of reasoning mechanism, ontology, knowledge representation, elicitation and modeling. In our study, we argue that knowledge construction through a hypermedia-based community channel is an effective approach to constructing expert knowledge. We consider knowledge ranging from the simplest forms, such as stories, to the most complex, such as on-the-job experience. Current approaches to encoding experience require the expert's knowledge to be acquired and represented as rules, cases or causal models. We differentiate two types of knowledge: content knowledge and socially derivable knowledge, the latter being knowledge gained through social interaction. The Intelligent Conversational Channel is the system that supports the building and sharing of this type of knowledge.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016.
Abstract:
Project-based firms typically follow an organizational structure in which all projects are handled from a functionalist perspective integrated with the projects themselves, so as to support a project-based structure. Organizations are increasingly realizing that innovation management is an integral part of any organizational strategy, and the same is true for project-based firms. Moreover, the current body of knowledge on project-based firms does not incorporate findings regarding the integration or use of innovation management in project management. It therefore becomes important to study organizations to see how innovation management is applied and how innovation is perceived within them. Secondly, the question of whether slack resources can contribute to higher levels of innovation must also be examined, since it has been a long-standing view that a lack of resources, or limited resources, results in higher levels of innovation. This study analyzes these two viewpoints using a qualitative analysis of 12 firms. The findings add to the current literature on innovation in organizations and project-based firms while expanding knowledge on innovation.