821 results for Reward based model
Abstract:
In this second counterpoint article, we refute the claims of Landy, Locke, and Conte and make the more specific case for our perspective: that ability-based models of emotional intelligence have value to add in the domain of organizational psychology. We address remaining issues, such as general concerns about the tenor and tone of the debates on this topic, a tendency for detractors to collapse across emotional intelligence models when reviewing the evidence and making judgments, and a consequent penchant to discount all models, including the ability-based one, as lacking validity. We specifically refute the following three claims from our critics with the most recent empirical evidence: (1) that emotional intelligence is dominated by opportunistic academics-turned-consultants who have amassed fame and fortune on the back of a concept that is shabby science at best; (2) that the measurement of emotional intelligence is grounded in unstable, psychometrically flawed instruments that have not demonstrated the discriminant and predictive validity needed to justify their use; and (3) that there is weak empirical evidence that emotional intelligence is related to anything of importance in organizations. We end with an overview of the empirical evidence supporting the role of emotional intelligence in organizational and social behavior.
Abstract:
An energy-based swing hammer mill model has been developed for coke oven feed preparation. It comprises a mechanistic power model to determine the dynamic internal recirculation and a perfect mixing mill model with a dual-classification function to mimic the operations of the crusher and screen. The model parameters were calibrated using a pilot-scale swing hammer mill at various operating conditions. The effects of the underscreen configurations and the feed sizes on hammer mill operations were demonstrated through the fitted model parameters, and relationships between the model parameters and the machine configurations were established. The model was validated using independent experimental data from single-lithotype coal tests with the same BJD pilot-scale hammer mill and full operation audit data from an industrial hammer mill. The outcome of the energy-based swing hammer mill model is the capability to simulate the impact of changing coal blends, mill configurations and operating conditions on product size distribution. Alternatively, the model can be used to select the machine settings required to achieve a desired product. (C) 2003 Elsevier Science B.V. All rights reserved.
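The abstract does not give the model equations, but the perfect-mixing mill balance it builds on is standard. As a minimal sketch, assuming the classical Whiten-type steady-state balance with a combined breakage/discharge parameter r/d and a lower-triangular appearance matrix, and ignoring the paper's dual-classification crusher/screen function, the product size distribution can be computed class by class:

```python
import numpy as np

def perfect_mixing_mill(feed, r_over_d, appearance):
    """Steady-state product of a Whiten-type perfect-mixing mill.

    feed       : feed mass flow per size class, ordered coarse -> fine
    r_over_d   : combined breakage-rate / discharge-rate parameter per class
    appearance : lower-triangular matrix a[i, j] = fraction of broken class-j
                 material that reports to (finer or equal) class i
    """
    n = len(feed)
    product = np.zeros(n)
    for i in range(n):
        # mass entering class i: fresh feed plus breakage products from coarser classes
        broken_in = sum(appearance[i, j] * r_over_d[j] * product[j] for j in range(i))
        # balance: f_i + sum_j a_ij (r/d)_j p_j = p_i + (r/d)_i p_i
        product[i] = (feed[i] + broken_in) / (1.0 + r_over_d[i] * (1.0 - appearance[i, i]))
    return product

# Illustrative three-class example (numbers are made up, not fitted parameters).
feed = np.array([10.0, 5.0, 1.0])                 # kg/s per size class
r_over_d = np.array([2.0, 1.0, 0.0])              # finest class is not broken further
appearance = np.array([[0.0, 0.0, 0.0],
                       [0.6, 0.0, 0.0],
                       [0.4, 1.0, 0.0]])
print(perfect_mixing_mill(feed, r_over_d, appearance))
```

In the fitted model the r/d parameters and classification functions are tied to mill energy and machine configuration, which is the part the paper calibrates; the sketch only shows the core mass balance.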
Abstract:
Involving groups in important management processes such as decision making has several advantages. By discussing and combining ideas, counter-ideas, critical opinions, identified constraints, and alternatives, a group of individuals can test potentially better solutions, sometimes in the form of new products, services, and plans. In the past few decades, operations research, AI, and computer science have had tremendous success creating software systems that can achieve optimal solutions, even for complex problems. The only drawback is that people don’t always agree with these solutions. Sometimes this dissatisfaction is due to an incorrect parameterization of the problem. Nevertheless, the reasons people don’t like a solution might not be quantifiable, because those reasons are often based on aspects such as emotion, mood, and personality. At the same time, monolithic individual decision-support systems centered on optimizing solutions are being replaced by collaborative systems and group decision-support systems (GDSSs) that focus more on establishing connections between people in organizations. These systems follow a kind of social paradigm. Combining optimization-centered and social-centered approaches is a topic of current research. However, even if such a hybrid approach can be developed, it will still miss an essential point: the emotional nature of group participants in decision-making tasks. We’ve developed a context-aware, emotion-based model to design intelligent agents for group decision-making processes. To evaluate this model, we’ve incorporated it in an agent-based simulator called ABS4GD (Agent-Based Simulation for Group Decision), which we developed. This multiagent simulator considers emotion- and argument-based factors while supporting group decision-making processes. Experiments show that agents endowed with emotional awareness reach agreements more quickly than those without such awareness. Hence, participant agents that integrate emotional factors in their judgments can be more successful because, in exchanging arguments with other agents, they take into account the emotional nature of group decision making.
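ABS4GD's internals are not described in the abstract, so the following is only a toy Python illustration of the qualitative claim that emotion-aware agents reach agreement faster: a hypothetical `negotiate` routine in which a simple mood variable raises an agent's concession rate after unfavourable rounds. All names, rates and thresholds are invented for illustration and are not taken from the paper.

```python
import random

def negotiate(n_agents=5, emotional=True, max_rounds=200, seed=1):
    """Toy group negotiation: agents hold a preferred value in [0, 1] and an
    agreement is reached when all preferences fall within a small band.
    Emotion-aware agents let a mood variable raise their concession rate
    after unfavourable rounds; unaware agents concede at a fixed rate."""
    rng = random.Random(seed)
    prefs = [rng.random() for _ in range(n_agents)]
    moods = [0.0] * n_agents                    # -1 (frustrated) .. 0 (neutral)
    for rounds in range(1, max_rounds + 1):
        centre = sum(prefs) / n_agents          # current group tendency
        if max(prefs) - min(prefs) < 0.05:      # everyone close enough: agreement
            return rounds
        for i, p in enumerate(prefs):
            gap = centre - p
            if emotional:
                # frustration after large gaps increases willingness to concede
                moods[i] = 0.5 * moods[i] - 0.5 * min(abs(gap), 1.0)
                rate = 0.10 + 0.15 * (-moods[i])
            else:
                rate = 0.10
            prefs[i] = p + rate * gap
    return max_rounds

print("emotion-aware :", negotiate(emotional=True))
print("unaware       :", negotiate(emotional=False))
```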
Abstract:
This paper discusses the increased need to support dynamic task-level parallelism in embedded real-time systems and proposes a Java framework that combines the Real-Time Specification for Java (RTSJ) with the Fork/Join (FJ) model under a fixed-priority scheduling scheme. Our work intends to support parallel runtimes that coexist with a wide range of other complex, independently developed applications, without any prior knowledge of their actual execution requirements, the number of parallel sub-tasks, or when those sub-tasks will be generated.
Abstract:
Dissertation to obtain a Master's Degree in Biotechnology
Abstract:
This paper presents the main features of a finite element (FE) numerical model, developed using the computer code FEMIX, to predict the shear-repair contribution of near-surface mounted (NSM) carbon-fibre-reinforced polymer (CFRP) rods to corroded reinforced concrete (RC) beams. In RC beams shear-repaired with the NSM technique, the CFRP rods are placed inside pre-cut grooves in the concrete cover of the beam's lateral faces and are bonded to the concrete with an epoxy adhesive. Experimental and 3D numerical modelling results are presented in terms of load-deflection curves and failure modes for four short corroded beams: two corroded beams (A1CL3-B and A1CL3-SB) and two control beams (A1T-B and A1T-SB). The beams denoted B were repaired in bending only with NSM CFRP rods, while those denoted SB were repaired in both bending and shear with the NSM technique. The corrosion of the tensile steel bars and its effect on the shear capacity of the RC beams is discussed. Results show that the FE model was able to capture the main aspects of the experimental load-deflection curves of the RC beams. Moreover, the experimental failure modes and the numerically predicted crack patterns were similar: the non-shear-repaired beams failed in a diagonal-tension mode, whereas the shear-repaired beams failed through a large flexural crack at mid-span together with concrete crushing. Three-dimensional crack patterns were produced for the shear-repaired beams in order to investigate the splitting cracks that occurred at mid-span and near the supports.
Abstract:
This article describes the main approaches adopted in a study focused on planning industrial estates on a sub-regional scale. The study was supported by an agent-based model that uses firms as agents to assess the attractiveness of industrial estates. The simulation was implemented in the NetLogo toolkit, and the environment represents a geographical space. Three scenarios and four hypotheses were used in the simulation to test the impact of different policies on the attractiveness of industrial estates. Policies were distinguished by the level of municipal coordination at which they were implemented and by the type of intervention. In the model, the attractiveness of an industrial estate was based on its level of facilities, amenities and accessibility and on its land price. Firms are able to move and relocate whenever they find an attractive estate. Relocating firms were selected by their size, location and distance to an industrial estate. Results show that a policy coordinated among municipalities is the most efficient policy for promoting advanced-qualified estates. In these scenarios, more industrial estates became attractive, more firms relocated and more vacant lots were occupied. Furthermore, the results also indicate that promoting widespread industrial estates with poor-quality infrastructure and amenities is an inefficient policy for attracting firms.
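The abstract lists the ingredients of the attractiveness score (facilities, amenities, accessibility, land price) without giving its functional form. A minimal sketch, assuming a simple weighted-sum score and a threshold-based relocation rule (both hypothetical, not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class Estate:
    name: str
    facilities: float      # 0..1 quality scores (illustrative scales)
    amenities: float
    accessibility: float
    land_price: float      # normalised 0..1, higher = more expensive
    vacant_lots: int

def attractiveness(estate, weights=(0.3, 0.2, 0.3, 0.2)):
    """Weighted score: facilities, amenities and accessibility attract firms,
    land price repels them. Weights are illustrative, not from the paper."""
    wf, wa, wc, wp = weights
    return (wf * estate.facilities + wa * estate.amenities
            + wc * estate.accessibility - wp * estate.land_price)

def relocate(firm_threshold, estates):
    """Move a firm to the most attractive estate that still has a vacant lot,
    provided its score exceeds the firm's relocation threshold."""
    candidates = [e for e in estates if e.vacant_lots > 0]
    if not candidates:
        return None
    best = max(candidates, key=attractiveness)
    if attractiveness(best) > firm_threshold:
        best.vacant_lots -= 1
        return best
    return None

estates = [Estate("advanced", 0.9, 0.8, 0.7, 0.6, vacant_lots=3),
           Estate("basic",    0.3, 0.2, 0.5, 0.2, vacant_lots=10)]
print(relocate(firm_threshold=0.4, estates=estates))   # picks the "advanced" estate
```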
Abstract:
This paper presents the main features of a finite element (FE) numerical model, developed using the computer code FEMIX, to predict the shear-repair contribution of near-surface mounted (NSM) carbon-fibre-reinforced polymer (CFRP) rods to corroded reinforced concrete (RC) beams. In RC beams shear-repaired with the NSM technique, the CFRP rods are placed inside pre-cut grooves in the concrete cover of the beam's lateral faces and are bonded to the concrete with an epoxy adhesive. Experimental and 3D numerical modelling results are presented in terms of load-deflection curves, failure modes and slip of the tensile steel bars for four short corroded beams: two corroded beams (A1CL3-B and A1CL3-SB) and two control beams (A1T-B and A1T-SB). The beams denoted B were repaired in bending only with NSM CFRP rods, while those denoted SB were repaired in both bending and shear with the NSM technique. The corrosion of the tensile steel bars and its effect on the shear capacity of the RC beams is discussed. Results show that the FE model was able to capture the main aspects of the experimental load-deflection curves of the RC beams. Moreover, the experimental failure modes and the numerically predicted crack patterns were similar: the non-shear-repaired beams failed in a diagonal-tension mode, whereas the shear-repaired beams failed through a large flexural crack at mid-span together with concrete crushing. Three-dimensional crack patterns were produced for the shear-repaired beams in order to investigate the splitting cracks that occurred at mid-span and near the supports.
Abstract:
Angiogenesis, the formation of new blood vessels sprouting from existing ones, occurs in several situations, such as wound healing, tissue remodeling, and near growing tumors. Under hypoxic conditions, tumor cells secrete growth factors, including VEGF. VEGF activates endothelial cells (ECs) in nearby vessels, leading to the migration of ECs out of the vessel and the formation of growing sprouts. A key process in angiogenesis is cellular self-organization, and previous modeling studies have identified mechanisms for producing networks and sprouts. Most theoretical studies of cellular self-organization during angiogenesis have ignored the interactions of ECs with the extracellular matrix (ECM), the jelly-like or hard materials that cells live in. Apart from providing structural support to cells, the ECM may play a key role in coordinating cellular motility during angiogenesis. For example, by modifying the ECM, ECs can affect the motility of other ECs long after they have left. Here, we present an explorative study of the cellular self-organization resulting from such ECM-coordinated cell migration. We show that a set of biologically motivated cell behavioral rules, including chemotaxis, haptotaxis, haptokinesis, and ECM-guided proliferation, suffices to form sprouts and branching vascular trees.
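The paper's cell-based model is not specified in the abstract; the toy sketch below only illustrates how the listed behavioral rules can be combined, assuming a single cell on a grid whose moves are biased by the local VEGF level (chemotaxis) and ECM density (haptotaxis) and which degrades the matrix it leaves behind. The field shapes, weights and update rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
vegf = np.tile(np.linspace(0.0, 1.0, N), (N, 1))   # VEGF increases to the right
ecm = np.ones((N, N))                              # initially uniform matrix density

def step(pos, beta_chem=4.0, beta_hapt=2.0):
    """Move the cell to one of its four neighbours with probability weighted by
    the local VEGF level (chemotaxis) and ECM density (haptotaxis); the cell
    then degrades the matrix it leaves, a trail later cells could respond to."""
    r, c = pos
    neighbours = [(min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    weights = np.array([np.exp(beta_chem * vegf[n] + beta_hapt * ecm[n])
                        for n in neighbours])
    nxt = neighbours[rng.choice(len(neighbours), p=weights / weights.sum())]
    ecm[r, c] *= 0.9                               # matrix modification left behind
    return nxt

pos = (N // 2, N // 2)
for _ in range(200):
    pos = step(pos)
print("final position (drifts toward high VEGF):", pos)
```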
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or highly diverse regions in a genome to phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting because of their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all of them to the most general one that relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of each data set. The experiments show that in none of the real data sets is holding all three assumptions realistic, meaning that simple models which hold these assumptions can be misleading and can result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. The experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, several experiments indicate that the proposed general model is biologically plausible.
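The exact KCM construction is not given in the abstract, but the role of the Kronecker product can be sketched: per-position 4x4 nucleotide rate matrices are combined into a 64x64 codon-level generator. The sketch below uses HKY-style matrices and a Kronecker sum, which still forbids double and triple substitutions (assumption (a)); relaxing (a), as in the generalized model, would add cross terms such as Kronecker products of two or three non-identity matrices. Selection and stop-codon handling are omitted, so this is only an illustration of the matrix assembly, not the thesis's model.

```python
import numpy as np

def nucleotide_rate_matrix(kappa=2.0, pi=(0.25, 0.25, 0.25, 0.25)):
    """Simple HKY-style 4x4 rate matrix (nucleotide order A, C, G, T)."""
    pi = np.asarray(pi, dtype=float)
    transitions = {(0, 2), (2, 0), (1, 3), (3, 1)}   # A<->G, C<->T
    Q = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i != j:
                Q[i, j] = (kappa if (i, j) in transitions else 1.0) * pi[j]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Per-position nucleotide matrices; using different kappas is one way of
# dropping assumption (b) (no mutation variation within a codon).
Q1, Q2, Q3 = (nucleotide_rate_matrix(kappa=k) for k in (2.0, 3.0, 1.5))
I = np.eye(4)

# Kronecker sum: a 64x64 codon generator allowing single-nucleotide changes only
# (i.e. still holding assumption (a) that double/triple substitutions are negligible).
Q_codon = (np.kron(np.kron(Q1, I), I)
           + np.kron(np.kron(I, Q2), I)
           + np.kron(np.kron(I, I), Q3))
print(Q_codon.shape)                          # (64, 64)
print(np.allclose(Q_codon.sum(axis=1), 0))    # rows of a generator sum to zero
```

Replacing `nucleotide_rate_matrix` with a GTR-style matrix would correspond to dropping assumption (c).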
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man-oriented), FMEA (system-oriented), or HAZOP (process-oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an occupational health and safety (OH&S) perspective. The industrial process is modeled as a set of interconnected subnets (state spaces) that describe its constitutive machines. Process-related factors are introduced explicitly through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and, consequently, the interconnection of the measure constraints. This is reflected in the construction of constraint-enrichment hierarchies, which can be used for simulation and analysis optimization within a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. Most of all, however, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
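CO-OPN adds object orientation and algebraic data types on top of Petri nets, which the abstract only describes at a high level. As a minimal sketch of the underlying place/transition mechanics (not of CO-OPN itself, and with an invented machine/operator example), a marking evolves by firing enabled transitions:

```python
class PetriNet:
    """Minimal place/transition net: a transition consumes tokens from its
    input places and produces tokens in its output places when enabled."""
    def __init__(self, marking):
        self.marking = dict(marking)            # place -> token count
        self.transitions = {}                   # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Hypothetical machine state space: a part moves from 'waiting' to 'processed'
# when a man-machine interaction ('operate') triggers the transition.
net = PetriNet({"waiting": 1, "operator_ready": 1, "processed": 0})
net.add_transition("operate", {"waiting": 1, "operator_ready": 1},
                   {"processed": 1, "operator_ready": 1})
net.fire("operate")
print(net.marking)   # {'waiting': 0, 'operator_ready': 1, 'processed': 1}
```

In MORM, the tokens would additionally carry measures constrained by the data types of the interconnected CO-OPN objects; the sketch shows only the plain token-firing rule.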