906 results for Self-organizing cloud
Abstract:
In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in the reads may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run-time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, allowing it to elastically scale up or down the number of replicas involved in read operations to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% compared to the strong consistency model in Cassandra, while maintaining the desired consistency requirements of the applications.
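The tuning loop this abstract describes can be sketched as follows. This is a minimal illustration only: the Poisson-based staleness estimator, the function names, and the example numbers are assumptions made for the sketch, not Harmony's actual model, which the paper derives from measured network latency and write rates.

```python
# Sketch: pick the smallest number of read replicas whose estimated
# stale-read probability stays below the application's tolerance.
# Assumption (not from the paper): writes arrive as a Poisson process
# and a replica is stale if a write landed within the propagation delay.
import math

def stale_prob_single(write_rate: float, delay: float) -> float:
    """Probability that one replica lags behind the latest write:
    at least one write occurred within the last `delay` seconds."""
    return 1.0 - math.exp(-write_rate * delay)

def replicas_needed(write_rate: float, delay: float,
                    total: int, tolerance: float) -> int:
    """Smallest read-replica count r such that the chance that all r
    contacted replicas are stale does not exceed `tolerance`."""
    p = stale_prob_single(write_rate, delay)
    for r in range(1, total + 1):
        if p ** r <= tolerance:
            return r
    return total  # fall back to strong-consistency-like reads

# Example: 10 writes/s, 10 ms propagation delay, 3 replicas, 1% tolerance.
print(replicas_needed(10.0, 0.010, 3, 0.01))  # → 2
```

The design point mirrors the abstract: when the estimated stale-read fraction rises (heavier writes, slower propagation), the read quorum grows toward strong consistency; when it falls, reads shrink back toward a single replica for latency and throughput.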
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Computer Science, Dissertation, 2016
Abstract:
Clouds are the largest source of uncertainty in climate science, and remain a weak link in modeling tropical circulation. A major challenge is to establish connections between particulate microphysics and macroscale turbulent dynamics in cumulus clouds. Here we address the issue from the latter standpoint. First we show how to create bench-scale flows that reproduce a variety of cumulus-cloud forms (including two genera and three species), and track complete cloud life cycles, e.g., from a "cauliflower" congestus to a dissipating fractus. The flow model used is a transient plume with volumetric diabatic heating scaled dynamically to simulate latent-heat release from phase changes in clouds. Laser-based diagnostics of steady plumes reveal Riehl-Malkus type protected cores. They also show that, unlike the constancy implied by early self-similar plume models, the diabatic heating raises the Taylor entrainment coefficient just above cloud base, depressing it at higher levels. This behavior is consistent with cloud-dilution rates found in recent numerical simulations of steady deep convection, and with aircraft-based observations of homogeneous mixing in clouds. In-cloud diabatic heating thus emerges as the key driver in cloud development, and could well provide a major link between microphysics and cloud-scale dynamics.
Abstract:
23rd Congress of the International Commission for Optics (ICO 23)
Abstract:
My goal was to describe how biological and ecological factors give shape to fishing practices that can contribute to the successful self-governance of a small-scale fishing system in the Gulf of California, Mexico. The analysis was based on a comparison of the main ecological and biological indicators that fishers claim to use to govern their day-to-day decision making about fishing and data collected in situ. I found that certain indicators allow fishers to learn about differences and characteristics of the resource system and its units. Fishers use such information to guide their day-to-day fishing decisions. More importantly, these decisions appear unable to shape the reproductive viability of the fishery because no indicators were correlated to the reproductive cycle of the target species. As a result, the fishing practices constitute a number of mechanisms that might provide short-term buffering capacity against perturbations or stress factors that otherwise would threaten the overall sustainability and self-governance of the system. The particular biological circumstances that shape the harvesting practices might also act as a precursor of self-governance because they provide fishers with enough incentives to meet the costs of organizing the necessary rule structure that underlies a successful self-governance system.
Abstract:
Functionalized microcantilevers provide an ideal platform for nano- and micromechanics and for the development of highly sensitive (bio)sensors. The operating principle relies on physicochemical events occurring on the functionalized side of the microcantilever, which induce a surface stress difference between the two sides of the cantilever and cause a vertical deflection of the lever. However, the interfacial factors and phenomena that govern the nature and intensity of the surface stress remain poorly understood. To shed light on this phenomenon, the first part of this thesis studies the response of gold-coated microcantilevers functionalized with an electroactive self-assembled monolayer (SAM). The formation of a ferrocenylundecanethiol (FcC11SH) SAM on the gold surface of a microcantilever serves as the model system for better understanding electrochemically induced surface stress. The results show that a redox transformation of the FcC11SH SAM creates a surface stress that results in a vertical deflection of the microcantilever. Depending on the flexibility of the microcantilever, this deflection can range from a few nanometers to a few micrometers. Oxidation of the FcC11SH SAM in a perchlorate-ion environment generates a compressive surface stress change. The results indicate that the microcantilever deflection is due to a lateral tension arising from molecular reorientation and expansion during charge transfer and anion pairing. To verify this hypothesis, the same experiments were repeated with microcantilevers coated with a mixed SAM, in which the electroactive ferrocene groups are isolated by inactive alkylthiols. When a potential is applied, a current is detected but the microcantilever shows no deflection.
These results confirm that the deflection of the microcantilever is due to a lateral pressure originating from the ferrocenium, which reorganizes and presses on its neighboring groups, rather than from anion pairing. The amplitude of the vertical deflection of the microcantilever depends on the molecular structure of the SAM and on the type of anion used in the electrochemical reaction. In the next part of the thesis, electrochemistry and surface plasmon resonance spectroscopy were combined to arrive at a description of the adsorption and aggregation of n-alkyl sulfates at the FcC11SAu/electrolyte interface. At all solution concentrations, the surfactant molecules stack perpendicular to the electrode surface as an interdigitated condensed monolayer. However, the density of the specifically adsorbed film was found to be affected by the state of organization of the surfactants in solution. At low concentration, where the surfactant molecules are present as solvated monomers, the monomers can readily adapt to the changing surface concentration of ferrocenium during the potential sweep. However, when the molecules are present in solution as micelles, a lower surfactant density was found, owing to their inability to respond effectively to the dynamically generated ferrocenium surface.
Abstract:
Charged aerosol particles and water droplets are abundant throughout the lower atmosphere, and may influence interactions between small cloud droplets. This note describes a small, disposable sensor for the measurement of charge in non-thunderstorm cloud, which is an improvement of an earlier sensor [K. A. Nicoll and R. G. Harrison, Rev. Sci. Instrum. 80, 014501 (2009)]. The sensor utilizes a self-calibrating current measurement method. It is designed for use on a free balloon platform alongside a standard meteorological radiosonde, measuring currents from 2 fA to 15 pA, and is stable to within 5 fA over a temperature range of 5 °C to −60 °C. During a balloon flight with the charge sensor through a stratocumulus cloud, charge layers up to 40 pC m⁻³ were detected on the cloud edges.
Abstract:
In this study we report detailed information on the internal structure of PNIPAM-b-PEG-b-PNIPAM nanoparticles formed by self-assembly in aqueous solutions upon an increase in temperature. NMR spectroscopy, light scattering and small-angle neutron scattering (SANS) were used to monitor different stages of nanoparticle formation as a function of temperature, providing insight into the fundamental processes involved. The presence of PEG in the copolymer structure significantly affects the formation of nanoparticles, causing their transition to occur over a broader temperature range. The crucial parameter that controls the transition is the PEG/PNIPAM ratio. For pure PNIPAM, the transition is sharp; the higher the PEG/PNIPAM ratio, the broader the transition. This behavior is explained by different mechanisms of PNIPAM block incorporation during nanoparticle formation at different PEG/PNIPAM ratios. Contrast variation experiments using SANS show that the structure of nanoparticles above cloud point temperatures for PNIPAM-b-PEG-b-PNIPAM copolymers is drastically different from the structure of PNIPAM mesoglobules. In contrast with pure PNIPAM mesoglobules, where solid-like particles and a chain network with a mesh size of 1-3 nm are present, nanoparticles formed from PNIPAM-b-PEG-b-PNIPAM copolymers have a non-uniform structure with "frozen" areas interconnected by single chains in Gaussian conformation. SANS data with deuterated "invisible" PEG blocks imply that PEG is uniformly distributed inside a nanoparticle. It is the kinetically flexible PEG blocks that affect nanoparticle formation, by preventing PNIPAM microphase separation.
Abstract:
In a recent study we demonstrated the emergence of turbulence in a trapped Bose-Einstein condensate of Rb-87 atoms. An intriguing observation in such a system is the behavior of the turbulent cloud during free expansion. The aspect ratio of the cloud size does not change in the way one would expect for an ordinary non-rotating (vortex-free) condensate. Here we show that the anomalous expansion can be understood, at least qualitatively, in terms of the presence of vorticity distributed throughout the cloud, effectively counteracting the usual reversal of the aspect ratio seen in free time-of-flight expansion of non-rotating condensates.
Abstract:
This work covers the synthesis of second-generation, ethylene glycol dendrons covalently linked to a surface anchor that contains two, three, or four catechol groups, the molecular assembly in aqueous buffer on titanium oxide surfaces, and the evaluation of the resistance of the monomolecular adlayers against nonspecific protein adsorption in contact with full blood serum. The results were compared to those of a linear poly(ethylene glycol) (PEG) analogue with the same molecular weight. The adsorption kinetics as well as resulting surface coverages were monitored by ex situ spectroscopic ellipsometry (VASE), in situ optical waveguide lightmode spectroscopy (OWLS), and quartz crystal microbalance with dissipation (QCM-D) investigations. The expected compositions of the macromolecular films were verified by X-ray photoelectron spectroscopy (XPS). The results of the adsorption study, performed in a high ionic strength ("cloud-point") buffer at room temperature, demonstrate that the adsorption kinetics increase with increasing number of catechol binding moieties and exceed the values found for the linear PEG analogue. This is attributed to the comparatively smaller and more confined molecular volume of the dendritic macromolecules in solution, the improved presentation of the catechol anchor, and/or their much lower cloud-point in the chosen buffer (close to room temperature). Interestingly, in terms of mechanistic aspects of "nonfouling" surface properties, the dendron films were found to be much stiffer and considerably less hydrated in comparison to the linear PEG brush surface, closer in their physicochemical properties to oligo(ethylene glycol) alkanethiol self-assembled monolayers than to conventional brush surfaces. Despite these differences, both types of polymer architectures at saturation coverage proved to be highly resistant toward protein adsorption. 
Although associated with higher synthesis costs, dendritic macromolecules are considered to be an attractive alternative to linear polymers for surface (bio)functionalization in view of their spontaneous formation of ultrathin, confluent, and nonfouling monolayers at room temperature and their outstanding ability to present functional ligands (coupled to the termini of the dendritic structure) at high surface densities.
Abstract:
The assessment of executive functions is an area of study that has seen considerable development in recent years. Despite much research examining the validity of various measures of executive functions in both direct and indirect formats, little evidence exists in the extant literature evaluating the correspondence between these types of measures. The current study examined the extent of correspondence, comprising concurrent validity, between the Delis-Kaplan Executive Function System (D-KEFS) and the Behavior Rating Inventory of Executive Function – Self-Report Version (BRIEF-SR). Participants included 30 undergraduate and high school students aged 18 years. Results indicated mixed evidence of concurrent validity between the two measures of executive functions. The findings revealed both an expected significant negative correlation and an absence of other expected correlations between the measures. Suggestions for future research in the assessment of executive functions are discussed.
Abstract:
Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds; others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system's behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large scale distributed systems could be merely a matter of perspective. It could be possible to understand the Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that would probably not be detected by analyzing each resource separately.
In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
Currently, student dropout rates are a matter of concern among universities. Many research studies aimed at discovering the causes have been carried out. However, few solutions that could serve all students and related problems have been proposed so far. One such problem is caused by the lack of the "knowledge chain educational links" that occurs when students move on to higher studies without mastering their basic studies. Most regulated studies imparted at universities are designed so that some basic subjects serve as support for other, more complicated, subjects, thus forming a complicated knowledge network. When a link in this chain fails, student frustration occurs, as it prevents the student from fully understanding the following educational links. In this proposal we try to mitigate these setbacks, which for the most part prolong the student's frustration beyond his college stay. On the one hand, we present a dissertation on the student's learning process, which we divide into a series of phases that amount to what we call the "learning lifecycle." We also analyze at which stage the action of the stakeholders involved in this scenario (teachers and students) is most important. On the other hand, we consider that Information and Communication Technologies (ICT), such as Cloud Computing, can help develop new ways of teaching in higher education, while easing and facilitating the student's learning process. But methods and processes need to be defined to direct the use of such technologies in the teaching process in general, and within higher education in particular, in order to achieve optimum results. Our methodology integrates the ICT, as another actor, into the "Learning Lifecycle". We stimulate students to stop being merely spectators of their own education, and encourage them to take an active part in their training process.
To do this, we offer a set of useful tools to determine not only the causes of academic failure (for self-assessment), but also to remedy these failures (with corrective actions); "once the causes are discovered, it is easier to determine solutions". We believe this study will be useful for both students and teachers. Students learn from their own experience and improve their learning process, while obtaining all of the "knowledge chain educational links" required in their studies. We stand by the motto "Studying to learn instead of studying to pass". Teachers will also benefit by detecting where and how to strengthen their teaching proposals. All of this will also result in decreasing dropout rates.
Abstract:
A Network of Evolutionary Processors, or NEP, is a computational model inspired by the evolutionary model of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the information manipulation of cells. In particular, a NEP defines a theoretical computing machine capable of solving NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (which require high scalability) at the cost of high space complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, where each word represents one cell. These fixed moments of evolution are called configurations. Similarly to the biological model, the words (cells) mutate and divide based on simple bio-operations, but only the fit words (much as in the process of natural selection) are kept for the next configuration. As a computational tool, a NEP defines a parallel and distributed architecture for symbolic processing; in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency and universality have been extensively studied and proved. Today, therefore, we can consider the theoretical NEP model to have reached maturity.
The main motivation of this Final Degree Project is to propose a practical approach that makes the leap from the theoretical NEP model to a real implementation able to run on high-performance computing platforms, in order to solve the complex problems that today's society demands. So far, the tools developed to simulate the NEP model, while correct and with satisfactory results, are usually tied to their execution environment, whether through specific hardware or through implementations particular to a single problem. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or one of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months, currently in its second iteration after leaving the prototype phase behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix comprises two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development takes into account the current state of the model, i.e., the definitions of the most common processors and filters that make up the NEP model family. Additionally, this component offers flexibility in execution: the simulator's capabilities can be extended without modifying Nepfix, using a scripting language.
As part of this component's development, a representation standard for the NEP model based on the JSON format has also been defined, together with a proposed way of representing and encoding words, which is needed for communication between servers. Additionally, an important characteristic of this component is that it can be considered an isolated application, and therefore the distribution and execution strategies are fully independent. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. The development of this module is worth highlighting not only for the expected practical results, but also for the research process that must be undertaken with this new perspective on the execution of natural computing systems. The main characteristic of applications running in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two distinct implementation perspectives (developed in two different iterations) of the distribution and execution model, which have a very significant impact on the simulator's capabilities and restrictions. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word.
This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (in terms of load balancing between processors), but it produces undesired effects such as non-determinism in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. On the other hand, the second iteration corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message queue server. However, in this perspective the benefits outweigh the drawbacks for sufficiently large problems, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered satisfactory: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to solving the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains. Other problems, beyond the scope of this project, remain open to future development, for example the standardization of word representation and optimizations in the execution of the synchronous model.
Finally, some preliminary results of this Final Degree Project were recently presented as a scientific paper at the "International Work-Conference on Artificial Neural Networks (IWANN) 2015" and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work, more than a Final Degree Project, is only the beginning of an effort that may have a greater impact on the scientific community. Abstract: Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells are accepted as surviving (correct) ones which are represented by a word in a given set of words, called the genotype space of the species. This feature is analogous to the natural process of evolution. Formally, NEP is based on an architecture for parallel and distributed processing; in other words, a network of language processors. Since the date when NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP. Specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity.
The main motivation for this End of Grade project (EOG project in short) is to propose a practical approximation that closes the gap between the theoretical NEP model and a practical implementation on high-performing computational platforms, in order to solve some of the high-complexity problems society requires today. Up until now, tools developed to simulate NEPs, while correct and successful, are usually tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants, either locally, like a traditional application, or in a distributed cloud environment. Nepfix as an application was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period has ended. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required, and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation. During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs has been defined based on JSON, as well as mechanisms to represent words and their possible manipulations. The NEP simulator is isolated from distribution, and as mentioned before, different applications that include it as a dependency are possible; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud.
The development carried a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider that Nepfix as a computational framework is successful: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many investigation branches are left open for future research. In this EOG, implementation guidelines are proposed for some of them, such as error recovery or dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, such as word representation standardization or NEP model optimizations. As a confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published in the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015.
Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems, and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
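The synchronous start-compute-synchronize cycle described in this abstract can be illustrated with a minimal sketch. Everything here is a simplified assumption for illustration: the substitution-rule and filter shapes, the two-node topology, and all names are invented for the sketch and do not reflect Nepfix's actual Java API or its JSON definition language.

```python
# Sketch of one synchronous NEP cycle: every node evolves its words,
# then all nodes exchange words subject to output/input filters.

def evolution_step(words, rules):
    """Apply each substitution rule (a -> b) once per word; words no
    rule touches survive unchanged."""
    out = set()
    for w in words:
        applied = False
        for a, b in rules:
            if a in w:
                out.add(w.replace(a, b, 1))
                applied = True
        if not applied:
            out.add(w)
    return out

def communication_step(nodes, edges):
    """Move each word that passes the sender's output filter and a
    receiver's input filter; words that cannot travel stay put."""
    moved = {name: set() for name in nodes}
    for name, node in nodes.items():
        for w in node["words"]:
            sent = False
            for dst in edges.get(name, []):
                if node["out_filter"](w) and nodes[dst]["in_filter"](w):
                    moved[dst].add(w)
                    sent = True
            if not sent:
                moved[name].add(w)
    for name in nodes:
        nodes[name]["words"] = moved[name]

# Two-node network: "m" mutates X into Y, then ships Y-words to "c".
nodes = {
    "m": {"words": {"XX"}, "rules": [("X", "Y")],
          "out_filter": lambda w: "Y" in w, "in_filter": lambda w: True},
    "c": {"words": set(), "rules": [],
          "out_filter": lambda w: False, "in_filter": lambda w: "Y" in w},
}
edges = {"m": ["c"]}
for _ in range(2):  # two configurations: evolve, then synchronize
    for node in nodes.values():
        node["words"] = evolution_step(node["words"], node["rules"])
    communication_step(nodes, edges)
```

The global barrier between the two steps is exactly what the synchronous model pays for: in the distributed setting the abstract describes, that barrier is implemented over a RabbitMQ message queue rather than a local loop.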
Abstract:
This article argues against the merger folklore that maintains that a merger negatively affects well-being and work attitudes primarily through the threat of job insecurity. We hold that the workplace is not only a resource for fulfilling a person's financial needs, but that it is an important component of the self-concept in terms of identification with the organization, as explained by social identity theory. We unravel the key concepts of the social identity approach relevant to the analysis of mergers and review evidence from previous studies. Then, we present a study conducted during a merger to substantiate our ideas about the effects of post-merger organizational identification above and beyond the effects of perceived job insecurity. We recommend that managers should account for these psychological effects through the provision of continuity and specific types of communication. © 2006 British Academy of Management.