936 results for Higher-order functions
Abstract:
The examination of workplace aggression as a global construct has gained considerable attention in recent years as organizations work to better understand and address the occurrence and consequences of this challenging construct. The purpose of this dissertation is to build on previous efforts to validate the appropriateness and usefulness of a global conceptualization of the workplace aggression construct. The dissertation is divided into two parts: Part 1 used a confirmatory factor analysis approach to assess the existence of workplace aggression as a global construct; Part 2 used a series of correlational analyses to examine the relationships between a selection of commonly experienced individual strain-based outcomes and the global construct conceptualization assessed in Part 1. Participants were a diverse sample of 219 working individuals from Amazon's Mechanical Turk participant pool. Results of Part 1 did not support a one-factor global conceptualization of the workplace aggression construct. However, support was found for a higher-order five-factor model, suggesting that workplace aggression may be conceptualized as an overarching construct made up of separate workplace aggression constructs. Results of Part 2 supported the relationships between an existing global workplace aggression conceptualization and a series of strain-based outcomes. Additional post-hoc correlational analyses showed that individual factors such as emotional intelligence and personality are related to the experience of workplace aggression. Further, moderated regression analysis showed that individuals experiencing high levels of workplace aggression reported higher job satisfaction when they felt strongly that the aggressive act was highly visible and, similarly, when they felt that there was a clear intent to cause harm. Overall, the findings of this dissertation support the need to simplify the current state of workplace aggression measurement. Future research should continue to examine workplace aggression in an effort to shed additional light on the structure and usefulness of this complex construct.
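For reference, the moderated regression mentioned above tests an interaction term of the following general form (an illustrative specification, not the authors' exact model):

$$\text{Satisfaction} = \beta_0 + \beta_1\,\text{Aggression} + \beta_2\,\text{Visibility} + \beta_3\,(\text{Aggression} \times \text{Visibility}) + \varepsilon,$$

where a significant $\beta_3$ indicates that perceived visibility (or, in the parallel model, perceived intent to harm) moderates the relationship between experienced aggression and job satisfaction.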
Abstract:
Ensemble stream modeling and data cleaning are sensor information processing systems whose training and testing methods differ in how their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for a bush or natural forest-fire event, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: one, the histogram of fire activity is highly skewed; two, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes against single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features were used with a simpler tree classifier. The ensemble framework for data cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensors led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
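As a point of reference, the F-measure referred to above is the (weighted) harmonic mean of precision and recall; a minimal sketch, using hypothetical detection counts rather than data from the study:

```python
def f_measure(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """F-beta score: weighted harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical fire-event detections: 40 true alarms, 10 false alarms, 5 missed events
print(round(f_measure(tp=40, fp=10, fn=5), 3))  # 0.842
```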
Abstract:
Septins are conserved GTPases that are deregulated in cancer and neurodegenerative diseases. They act as scaffold proteins and form a diffusion barrier at the plasma membrane and at the midbody during cytokinesis. They interact with actin and assemble into complexes that polymerize into highly organized structures (rings and filaments). Their assembly dynamics and their role in the cell remain to be elucidated. Drosophila is a simple model for studying septins, since it has only 5 genes (sep1, sep2, sep4, sep5, peanut) compared with 13 genes in humans. Using an antibody against Pnut, we identified tubular structures in 30% of Drosophila S2 cells. The goal of my project is to characterize these tubes by elucidating their constituents, behaviour and properties, in order to clarify the mechanism by which septins form highly organized structures and interact with the actin cytoskeleton. By immunofluorescence, I showed that these tubes are cytoplasmic, in mitosis and in interphase, suggesting that they are not regulated by the cell cycle. To investigate the composition and dynamic properties of these tubes, I generated a cell line expressing Sep2-GFP, which localizes to the tubes, as well as RNAi against the five septins. Three septins are important for the formation of these tubes and rings, namely Sep1, Sep2 and Pnut. Depletion of Sep1 disperses the GFP signal into flakes, whereas depletion of Sep2 or Pnut leads to a uniform dispersion of the GFP signal throughout the cell. FRAP experiments on the Sep2-GFP line reveal very slow signal recovery, indicating that these structures are very stable. I also demonstrated a relationship between actin and septins. Treatment with Latrunculin A (an inhibitor of actin polymerization) or Jasplakinolide (a stabilizer of actin filaments) leads to rapid (< 30 min) depolymerization of the tubes into rings floating in the cytoplasm, even though these tubes are not recognized by F-actin staining. Actin05C-mCherry localizes to the tubes, whereas the polymerization-deficient mutant Actin05C-R62D-mCherry loses this localization. We also observe that depletion of Cofilin and AIP1 (which destabilizes actin) leads to the same phenotype as treatment with Latrunculin A or Jasplakinolide. We therefore conclude that a dynamic actin cytoskeleton is necessary for the formation and maintenance of septin tubes. Future studies will aim to better understand the organization of septins into highly organized structures and their relationship with actin. This will be useful for building the septin interaction network, which could help explain their deregulation in cancer and neurodegenerative diseases.
Abstract:
A confirmatory attempt is made to assess the validity of a hierarchic structural model of fears. Using a sample of 1,980 adult volunteers in Portugal, the present study set out to delineate the multidimensional structure and hierarchic organization of a large set of feared stimuli by contrasting a higher-order model comprising general fear at the highest level against a first-order model and a unitary fear model. Following a refinement of the original model, support was found for a five-factor model at the first-order level, namely (1) Social fears, (2) Agoraphobic fears, (3) Fears of bodily injury, death and illness, (4) Fears of display to aggressive scenes, and (5) Harmless animals fears. These factors in turn loaded on a General fear factor at the second-order level. However, the first-order model was as parsimonious as the hierarchic higher-order model. The hierarchic model supports a quantitative hierarchic approach that decomposes fear disorders into agoraphobic, social, and specific (animal and blood-injury) fears.
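In standard CFA notation, a higher-order structure of the kind described here takes the following general form (an illustrative formulation, not the authors' exact specification):

$$y_{ij} = \lambda_{ij}\,\eta_j + \varepsilon_{ij}, \qquad \eta_j = \gamma_j\,\xi + \zeta_j, \qquad j = 1,\dots,5,$$

where each feared stimulus $y_{ij}$ loads on one of the five first-order factors $\eta_j$, and every $\eta_j$ in turn loads on the single second-order General fear factor $\xi$.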
Abstract:
Fire has always been a major concern for designers of steel and concrete structures. Designing fire-resistant structural elements is not an easy task due to several limitations, such as the lack of fire-resistant construction materials. Concrete reinforcement cover and external insulation are the most commonly adopted systems to protect concrete and steel from overheating, while spalling of concrete is minimised by using HPFRC instead of standard concrete. Although these methodologies work very well for low-rise concrete structures, this is not the case for high-rise and inaccessible buildings, where fire loading lasts much longer. Fire can permanently damage costly structures, which is unsafe and can lead to loss of life. In this research, the author proposes a new type of main reinforcement for concrete structures which can provide better fire resistance than steel or FRP re-bars. It consists of continuous braided fibre rope, generally made from fire-resistant materials such as carbon or glass fibre. These fibres have excellent tensile strengths, sometimes in excess of ten times that of steel. In addition to fire resistance, these ropes can produce lighter and corrosion-resistant structures. By avoiding the use of expensive resin binders, fibres are easily bound together using braiding techniques, ensuring that tensile stress is evenly distributed throughout the reinforcement. In order to consider braided ropes as a form of reinforcement, it is first necessary to establish their mechanical performance at room temperature and investigate the pull-out resistance of both unribbed and ribbed ropes. Ribbing of ropes was achieved by braiding the rope over a series of glass beads. Adhesion between the rope and concrete was drastically improved by ribbing, and further improved by pre-stressing the ropes and reducing the slack fibres. Two types of material have been considered for the ropes: carbon and aramid. An implicit finite element approach is proposed to model the braided fibres using a Total Lagrangian formulation, based on the theory of small strains and large rotations. Modelling tows and strands as elastic transversely isotropic materials is a good assumption when stiff and brittle fibres such as carbon and glass are considered. The rope-to-concrete and strand-to-strand bond interaction/adhesion was numerically simulated using newly proposed hierarchical higher-order interface elements. Elastic and linear-damage cohesive models were used to simulate, respectively, the non-penetrative 'free' sliding interaction between strands and the adhesion between ropes and concrete. The numerical simulations showed de-bonding features similar to those observed in experimental pull-out tests of braided ribbed rope reinforced concrete.
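For context, a linear-damage (bilinear) cohesive interface law of the kind mentioned above is commonly written as follows (a generic textbook form, not necessarily the exact law adopted in this work):

$$t = (1 - d)\,K\,\delta, \qquad d = \frac{\delta_f\,(\delta_{\max} - \delta_0)}{\delta_{\max}\,(\delta_f - \delta_0)} \in [0, 1],$$

where $t$ is the interface traction, $K$ the initial penalty stiffness, $\delta$ the separation, $d$ the damage variable, $\delta_0$ the separation at damage onset, $\delta_f$ the separation at complete de-bonding, and $\delta_{\max}$ the largest separation reached in the loading history.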
Abstract:
Structured abstract. Purpose: To deepen, in the grocery retail context, the roles of consumer perceived value and consumer satisfaction as antecedent dimensions of customer loyalty intentions. Design/methodology/approach: Employing a short version (12 items) of the original 19-item PERVAL scale of Sweeney & Soutar (2001), a structural equation modeling approach was applied to investigate the statistical properties of the indirect influence on loyalty of a reflective second-order customer perceived value model. The performance of three alternative estimation methods was compared through bootstrapping techniques. Findings: Results provided (i) support for the use of the short form of the PERVAL scale in measuring consumer perceived value; (ii) evidence that the influence of the four highly correlated independent latent predictors on satisfaction was well summarized by a higher-order reflective specification of consumer perceived value; (iii) evidence that the emotional and functional dimensions were determinant for the relationship with the retailer; (iv) evidence that parameter bias across the three estimation methods was only significant for small bootstrap sample sizes. Research limitations/implications: Future research is needed to explore the use of the short form of the PERVAL scale in more homogeneous groups of consumers. Originality/value: Firstly, to explain customer loyalty indirectly, as mediated by customer satisfaction, a recent short form of the PERVAL scale and a second-order reflective conceptualization of value were adopted. Secondly, three alternative estimation methods were used and compared through bootstrapping and simulation procedures.
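As an aside, the bootstrap comparison of estimators referred to above follows the usual resampling pattern; a minimal sketch with a made-up estimator and synthetic data (all names and data are illustrative, not the study's):

```python
import numpy as np

def bootstrap_estimates(data: np.ndarray, estimator, n_boot: int = 2000, seed: int = 0):
    """Resample rows with replacement and re-apply the estimator to each resample."""
    rng = np.random.default_rng(seed)
    n = len(data)
    return np.array([estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)])

# Synthetic example: bootstrap the slope of a simple regression of y on x
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.8, size=200)
data = np.column_stack([x, y])

slope = lambda d: np.polyfit(d[:, 0], d[:, 1], 1)[0]
boots = bootstrap_estimates(data, slope)
print(boots.mean(), np.percentile(boots, [2.5, 97.5]))  # bootstrap mean and 95% interval
```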
Abstract:
Fear of Missing Out (FoMO) is a pervasive apprehension that others might be having rewarding experiences from which one is absent. Consequently, individuals experiencing FoMO wish to stay constantly in contact with what others are doing and engage with social networking sites for this purpose. In recent times, FoMO has received increased attention from psychological research, as a minority of users experiencing high levels of FoMO, particularly young people, might develop problematic social networking site use, defined as the maladaptive and excessive use of social networking sites resulting in symptoms associated with other addictions. According to the theoretical framework of the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, FoMO and certain motives for use may foster problematic use in individuals who display unmet psychosocial needs. However, to date, the I-PACE model has only conceptualized the general higher-order mechanisms related to the development of problematic use. Accordingly, the overall purpose of this dissertation was to deepen the understanding of the mediating role of FoMO between specific predisposing variables and problematic social networking site use. Adopting a psychological approach, two empirical and exploratory cross-sectional studies, conceived as independent research, were conducted through path analysis.
Abstract:
The recent widespread use of social media platforms and web services has led to a vast amount of behavioral data that can be used to model socio-technical systems. A significant part of these data can be represented as graphs or networks, which have become the prevalent mathematical framework for studying the structure and dynamics of complex interacting systems. However, analyzing and understanding these data presents new challenges due to their increasing complexity and diversity. For instance, the characterization of real-world networks needs to account for their temporal dimension and to incorporate higher-order interactions beyond the traditional pairwise formalism. The ongoing growth of AI has led to the integration of traditional graph mining techniques with representation learning and low-dimensional embeddings of networks to address current challenges. These methods capture the underlying similarities and geometry of graph-shaped data, generating latent representations that enable the resolution of various tasks, such as link prediction, node classification, and graph clustering. As these techniques gain popularity, there is also a growing concern about their responsible use. In particular, there has been increased emphasis on addressing the limited interpretability of graph representation learning. This thesis contributes to the advancement of knowledge in the field of graph representation learning and has potential applications in a wide range of complex-systems domains. We initially focus on forecasting problems related to face-to-face contact networks with time-varying graph embeddings. Then, we study hyperedge prediction and reconstruction with simplicial complex embeddings. Finally, we analyze the problem of interpreting latent dimensions in node embeddings for graphs. The proposed models are extensively evaluated in multiple experimental settings, and the results demonstrate their effectiveness and reliability, achieving state-of-the-art performance and providing valuable insights into the properties of the learned representations.
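To make the idea of link prediction from node embeddings concrete, here is a minimal sketch using a plain spectral embedding of the adjacency matrix (an illustrative baseline on a toy graph, not one of the models developed in the thesis):

```python
import numpy as np

def spectral_embedding(adj: np.ndarray, dim: int) -> np.ndarray:
    """Embed nodes with the leading eigenvectors of a symmetric adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj)
    order = np.argsort(-vals)[:dim]                   # largest eigenvalues first
    scale = np.sqrt(np.clip(vals[order], 0.0, None))  # guard against negative eigenvalues
    return vecs[:, order] * scale

def link_score(emb: np.ndarray, u: int, v: int) -> float:
    """Score a candidate edge (u, v) by the inner product of its node embeddings."""
    return float(emb[u] @ emb[v])

# Toy graph: two triangles joined by a single bridge edge (2, 3)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = np.zeros((6, 6))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

emb = spectral_embedding(adj, dim=2)
# A within-triangle pair scores higher than a cross-triangle pair
print(link_score(emb, 0, 1), link_score(emb, 0, 5))
```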
Transnational study of roles/functions and associated ICT competencies for Higher Education teachers
Abstract:
This study is part of the eLene-TLC1 Virtual Campus project (2007-2008), supported by the eLearning programme of the European Commission. The aim of the project is for teachers and students to make the best possible use of ICT in higher education: preparing teachers for net-generation students, enabling students to transfer knowledge and practices from everyday life to their learning, and stimulating the full integration of innovative teaching and learning practices made possible by a constantly evolving technological environment. In order to address part of this general objective, a study was designed to examine the ICT competencies of Higher Education teachers in online learning environments.
Abstract:
In this work, the energy response functions of a CdTe detector were obtained by Monte Carlo (MC) simulation in the energy range from 5 to 160 keV, using the PENELOPE code. The response calculations included the carrier transport features and the detector resolution. The computed energy response function was validated through comparison with experimental results obtained with (241)Am and (152)Eu sources. In order to investigate the influence of the correction by the detector response in the diagnostic energy range, x-ray spectra were measured using a CdTe detector (model XR-100T, Amptek) and then corrected by the energy response of the detector using the stripping procedure. Results showed that the CdTe detector exhibits a good energy response at low energies (below 40 keV), showing only small distortions in the measured spectra. For energies below about 80 keV, the contribution of the escape of Cd and Te K x-rays produces significant distortions in the measured x-ray spectra. For higher energies, the most important corrections are the detector efficiency and the carrier trapping effects. The results showed that, after correction by the energy response, the measured spectra are in good agreement with those provided by a theoretical model from the literature. Finally, our results showed that detailed knowledge of the response function and a proper correction procedure are fundamental for obtaining more accurate spectra from which quality parameters (i.e., half-value layer and homogeneity coefficient) can be determined.
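For orientation, a stripping-type correction of the kind mentioned above can be sketched as follows (a schematic illustration with a generic response matrix, not the PENELOPE-derived response or the exact procedure used in the work):

```python
import numpy as np

def strip_spectrum(measured: np.ndarray, response: np.ndarray) -> np.ndarray:
    """Recover an incident spectrum from a measured pulse-height spectrum.

    response[i, j] = probability that a photon in incident-energy bin j is
    recorded in pulse-height bin i (full-energy peak on the diagonal, escape
    and partial-energy events below it). Bins are processed from the highest
    energy downward: the full-energy counts fix the incident counts in that
    bin, and that bin's partial-energy tail is then stripped from all lower bins.
    """
    m = measured.astype(float).copy()
    incident = np.zeros_like(m)
    for j in range(len(m) - 1, -1, -1):
        incident[j] = m[j] / response[j, j]      # counts attributed to the full-energy peak
        m[:j] -= incident[j] * response[:j, j]   # remove this bin's tail from lower bins
    return incident
```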
Abstract:
The higher education system in Europe is currently under stress, and the debates over its reform and future are gaining momentum. Now that, for most countries, this is a time of change in the overall society and in the whole education system, the legal and political dimensions have gained prominence, but this has not been followed by a more integrative approach to the problem of order, its reform and the issue of regulation, beyond the typical static and classical cost-benefit analyses. The two classical approaches for studying (and for designing the policy measures of) the reform of the higher education system, cost-benefit analysis and legal scholarship description, have to be integrated. The argument of our paper is that the integration of economic and legal approaches, what Warren Samuels called the legal-economic nexus, is meaningful and necessary, especially if we want to address the problem of order (as formulated by Joseph Spengler) and the overall regulation of the system. On the one hand, and without neglecting the interest and insights gained from cost-benefit analysis or other approaches to value-for-money assessment, we focus our study on the legal, social and political aspects of the regulation of the higher education system and its reform in Portugal. On the other hand, the economic and financial problems have to be taken into account, but in a more inclusive way with regard to the indirect and other socio-economic costs not contemplated in traditional or standard assessments of policies for the tertiary education sector. In the first section of the paper, we discuss the theoretical and conceptual underpinning of our analysis, focusing on the evolutionary approach, the role of critical institutions, the legal-economic nexus and the problem of order. All these elements are related to the institutional tradition, from Veblen and Commons to Spengler and Samuels. The second section states the problem of regulation in the higher education system and the issue of policy formulation for tackling it. The current situation is clearly one of crisis, with the expansion of the cohorts of young students coming to an end and recurrent scandals in private institutions. In the last decade, after a protracted period of extension or expansion of the system, i.e., continuous growth in the number of students, universities and other institutions have been competing harder to attract students and have seen their financial situation put at risk. It seems that we are entering a period of radical uncertainty and higher competition, and the new configuration that is slowly building up is growth in intensity, which means upgrading the quality of higher learning and becoming more involved in vocational training and life-long learning. With this change, and along with other deep changes in Portuguese society and the economy, the current regulation has shown signs of maladjustment. The third section presents our conclusions on the current issue of regulation and the policy challenge. First, we underline the importance of an evolutionary approach to a process of change that is essentially dynamic. Special attention is given to the issues related to an evolutionary construal of policy analysis and formulation. Second, the integration of law and economics, through the notion of the legal-economic nexus, allows us to better define the issues of regulation and the concrete problems that the universities are facing.
One aspect is the instability of the political measures regarding the public administration, on which the higher education system depends financially, legally and institutionally, to say the least. A corollary is the lack of a clear strategy in the policy reforms. Third, our research criticizes several studies, such as the one made by the OECD in late 2006 for the Ministry of Science, Technology and Higher Education, for being too static and for neglecting fundamental aspects of regulation such as the logic of the actors, groups and organizations who are major players in the system. Finally, simply changing the legal rules will not per se change the behaviors that the authorities want to change. By this, we mean that it is remiss of the policy maker to ignore some of the critical issues of regulation, namely the continuous non-respect by the academic management and administrative bodies of universities of the legal rules that were once promulgated. Changing the rules does not change the problem, especially without the necessary debates from the different relevant quarters that make up the higher education system; the issues of social interaction remain intact. Our treatment of the matter is organized in the following way. In the first section, the theoretical principles are developed in order to study more adequately the transformation of higher education with a modest evolutionary theory and a legal-economic nexus of the interactions of the system and the policy challenges. After describing, in the second section, the recent evolution and current working of higher education in Portugal, we analyze the legal framework and the current regulatory practices and problems in light of the theoretical framework adopted. We end with some conclusions on the current problems of regulation and the policy measures that have been discussed in recent years.
Abstract:
This paper addresses limit cycles and signal propagation in dynamical systems with backlash. The study follows the describing function (DF) method for approximate analysis of nonlinearities and generalizes it in the perspective of the fractional calculus. The concept of fractional order describing function (FDF) is illustrated and the results for several numerical experiments are analysed. FDF leads to a novel viewpoint for limit cycle signal propagation as time-space waves within system structure.
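For reference, the classical (integer-order) describing function that the paper generalizes is the complex gain of the first harmonic of the nonlinearity's output to a sinusoidal input $x(t) = A\sin(\omega t)$, and limit cycles are predicted by harmonic balance:

$$N(A,\omega) = \frac{1}{\pi A}\int_0^{2\pi} y(\theta)\,\bigl(\sin\theta + j\cos\theta\bigr)\,d\theta, \qquad 1 + N(A,\omega)\,G(j\omega) = 0,$$

where $y(\theta)$ is the steady-state output of the nonlinearity (here, backlash) over one input period and $G(j\omega)$ is the linear part of the loop. The FDF introduced in the paper extends this first-harmonic approximation using fractional-order operators.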
Abstract:
Understanding complex biological processes requires sophisticated experimental and computational approaches. Recent advances in functional genomics strategies provide powerful tools for collecting data on the interconnectivity of genes, proteins and small molecules, in order to study the organizational principles of their cellular networks. Integrating this knowledge within a systems-biology framework would allow the prediction of new functions for genes that remain uncharacterized to date. To make such predictions at the genomic scale in the yeast Saccharomyces cerevisiae, we developed an innovative strategy that combines high-throughput interactome screening of protein-protein interactions, in silico prediction of gene function, and validation of these predictions by high-throughput lipidomics. First, we carried out a large-scale screen of protein-protein interactions using protein-fragment complementation. This method detects interactions in vivo between proteins expressed from their natural promoters. Moreover, no bias related to membrane interactions was observed with this method, in contrast to other existing techniques for detecting protein-protein interactions. Consequently, we discovered several new interactions and increased the coverage of a lipid-homeostasis interactome whose understanding is still incomplete. We then applied a machine-learning algorithm to identify eight uncharacterized genes with a potential role in lipid metabolism. Finally, we investigated whether these genes, and a distinct group of transcriptional regulators not previously implicated with lipids, play a role in lipid homeostasis. To this end, we analyzed the lipidomes of deletion mutants of selected genes. To examine a large number of strains, we developed a high-throughput platform for high-content lipidomic screening of yeast mutant libraries. This platform combines high-resolution Orbitrap mass spectrometry with a dedicated data-processing framework supporting lipid phenotyping of hundreds of Saccharomyces cerevisiae mutants. The lipidomics experiments confirmed the functional predictions by revealing differences in the lipid metabolic phenotypes of deletion mutants lacking the genes YBR141C and YJR015W, known for their involvement in lipid metabolism. An altered lipid phenotype was also observed for a deletion mutant of the transcription factor KAR4, which had not previously been linked to lipid metabolism. Together, these results show that a process integrating the acquisition of new molecular interactions, computational prediction of gene functions and an innovative high-throughput lipidomics platform constitutes an important addition to existing systems-biology methodologies. Developments in functional genomics methodologies and lipidomics technologies thus provide new means of studying the biological networks of higher eukaryotes, including mammals. Consequently, the strategy presented here has the potential to be applied to more complex organisms.
Abstract:
The brain, with its highly complex structure made up of simple units, interconnected information pathways and specialized functions, has always been an object of mystery and scientific fascination for physiologists and neuroscientists, and lately for mathematicians and physicists. Biophysicists are engaged in building the bridge between the biological and physical sciences, guided by the conviction that natural scenarios that appear extraordinarily complex may be tackled by applying principles from the realm of the physical sciences. In a similar vein, this report aims to describe how nerve cells transmit signals, how these signals are put together, and how, out of this integration, higher functions emerge and are reflected in the electrical signals produced in the brain. Viewing the EEG signal through the looking glass of nonlinear theory, the dynamics of the underlying complex system, the brain, are inferred, and significant implications of the findings are explored.