793 results for Learned institutions and societies.


Relevance: 100.00%

Abstract:

Software testing is a key aspect of software reliability and quality assurance, in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can therefore potentially benefit from solutions provided by the knowledge management discipline, and there are in fact a number of proposals concerning effective knowledge management for several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource, enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the existing proposals for managing software testing experience, we detected significant weaknesses in current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for managing the lessons learned; and (iii) supports the design of software tools to manage the lessons learned. Results: A different approach, based on managing the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing the experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
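To make the intended use concrete, the following minimal sketch (a hypothetical illustration, not the schema defined by the authors' architectural model) shows how a software testing lesson learned might be structured as a record and handled by a small store covering the gather/reuse cycle described above; all field and function names are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestingLesson:
    # Hypothetical structure of one software testing lesson learned;
    # field names are illustrative assumptions, not the paper's schema.
    title: str
    context: str          # project, test level, tools in use
    problem: str          # what went wrong or was hard to test
    recommendation: str   # what to do (or avoid) next time
    keywords: List[str] = field(default_factory=list)

class LessonStore:
    # Minimal in-memory store covering gathering and reuse of lessons.
    def __init__(self) -> None:
        self._lessons: List[TestingLesson] = []

    def gather(self, lesson: TestingLesson) -> None:
        self._lessons.append(lesson)

    def reuse(self, keyword: str) -> List[TestingLesson]:
        # Return the lessons relevant to the tester's current situation.
        return [l for l in self._lessons if keyword in l.keywords]

A real system would add dissemination (search, notification, review workflows) and persistence on top of such a structure.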

Relevance: 100.00%

Abstract:

Toponomastics is increasingly interested in the subjective role of place names in everyday life. Within Urban Geography, interest in this matter is currently growing, as recent changes in modes of habitation have urged our discipline to find new ways of exploring cities. In this context, studying how the significance of names is connected to an urban society constitutes a very interesting approach. We believe in the importance of place names as tools for decoding urban areas and societies at a local scale. This consideration has frequently been taken into account in the analysis of exonyms, although in their case they are not exempt from political and practical implications that prevail over their function as tools. The study of toponomastic processes helps us understand how the city works, by analyzing the links between urban landscape, imaginaries and toponyms, which are reflected in the scarcity of some names, in the biased creation of new toponyms, and in the pressure exercised on every place name by tourists, residents and local government to change, maintain or eliminate it. Our case study, Toledo, is one of the oldest cities in Spain, full of myths, stories and histories that can only be understood together with processes of internal evolution of the city, linked to the arrival of new residents and the increasingly noticeable change in its historical landscape. At a local scale, we aim to decode the information that its toponyms contain about its landscape and its society.

Relevance: 100.00%

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually and requires adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data, and secondly, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing from the training data (i.e., unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), and as stationary or streaming depending on the characteristics of the data and the underlying rate of change. Throughout this thesis, we deal with the classification problem under three different settings, namely supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we basically use Bayesian network classifiers as models.

The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers (MBCs) from stationary data. They are proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second, named MB-MBC, is a filter constraint-based approach built on Markov blankets. Both methods are applied to two important real-world problems, namely the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used for the Parkinson's disease prediction problem, namely multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy, as well as regarding the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables.

The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their very rapid generation process and their concept-drifting aspect; that is, the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out of date and in need of updating. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. Then, if any drift occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general, as it can be applied to several classification models. Using two different models, namely the naive Bayes classifier and logistic regression, CPL-DS is tested on synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams and also achieves good classification performance.

Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. Then, if a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
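For illustration, the Page-Hinkley test mentioned in the third contribution is a standard sequential change-detection scheme; the sketch below is a generic, simplified version configured to flag a sustained drop in a monitored score such as the average log-likelihood of the current classifier. The parameter values and names are illustrative assumptions, not those used in the thesis.

class PageHinkley:
    # One-sided Page-Hinkley test for a sustained drop in a monitored score
    # (e.g. the average log-likelihood of the current classifier).
    # delta is the tolerated magnitude of change and lam the alarm threshold;
    # both defaults are illustrative only.
    def __init__(self, delta: float = 0.005, lam: float = 50.0):
        self.delta = delta
        self.lam = lam
        self.n = 0          # observations seen so far
        self.mean = 0.0     # running mean of the score
        self.cum = 0.0      # cumulative deviation m_t
        self.min_cum = 0.0  # running minimum M_t

    def update(self, score: float) -> bool:
        # Feed one new score; return True when a drift is signalled.
        self.n += 1
        self.mean += (score - self.mean) / self.n
        # A sustained drop makes (mean - score) positive on average, so the
        # cumulative sum rises away from its running minimum.
        self.cum += self.mean - score - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.lam

On an alarm, LA-MB-MBC would adapt the current classifier locally around the changed nodes, whereas GA-MB-MBC would relearn it from scratch, as described above.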

Relevance: 100.00%

Abstract:

This paper describes the preliminary results of an intercomparison of spectroradiometers for global (GNI) and direct normal incidence (DNI) irradiance in the visible (VIS) and near-infrared (NIR) spectral regions, together with an assessment of the impact these results may have on the calibration of triple-junction photovoltaic devices and on the relevant spectral mismatch calculation. The intercomparison was conducted by six European scientific laboratories and a Japanese industrial partner. Seven institutions and seven spectroradiometer systems, representing different technologies and manufacturers, were involved, constituting a good cross section of today's available instrumentation for solar spectrum measurements.
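For context, the spectral mismatch factor underlying such a calculation is conventionally defined (following IEC 60904-7) as a ratio of integrals of the measured and reference spectra weighted by the spectral responses of the device under test and of the reference device. The sketch below is a generic numerical version of that definition, not the procedure used by the participating laboratories; all array names are assumptions.

import numpy as np

def spectral_mismatch(wl, e_meas, e_ref, sr_dut, sr_ref):
    # Generic spectral mismatch factor (cf. IEC 60904-7).
    # wl     : wavelength grid [nm], shared by all arrays
    # e_meas : measured spectral irradiance (e.g. from a spectroradiometer)
    # e_ref  : reference spectrum tabulated on the same grid (e.g. AM1.5)
    # sr_dut : spectral response of the (sub)cell under test
    # sr_ref : spectral response of the reference device
    def integrate(y):
        # Simple trapezoidal rule, kept explicit so the sketch is self-contained.
        return np.sum((y[1:] + y[:-1]) * np.diff(wl)) / 2.0

    num = integrate(e_meas * sr_dut) * integrate(e_ref * sr_ref)
    den = integrate(e_meas * sr_ref) * integrate(e_ref * sr_dut)
    return num / den

For a triple-junction device, the factor would typically be evaluated once per junction, using the spectral response of each subcell in turn.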

Relevance: 100.00%

Abstract:

Planning and Community Development: Case Studies presents the findings of the inter-university Seminar held on 28-29 July 2011 and organized by researchers from the Technical University of Madrid and the University of California, Berkeley, who were fortunate to have the presence of the renowned Professor John Friedmann. Professors, researchers and PhD students from our research groups presented their work as scientific communications that were enriched by the debate among the different researchers who attended the Seminar. All of them appear in the picture below, in front of the gate of Haviland Hall at UC Berkeley. This book analyses the concept of planning and its evolution so far, leading to the conceptualization of governance as an expression of planning practice. It also studies the role of social capital and cooperation as tools for community development. The conceptual analysis is complemented by six case studies that put forward experiences of planning and community development carried out in diverse social and cultural contexts in Latin America, Europe and North America. This publication comes after more than 20 years of work by the researchers who met at the Seminar. Through their work in managing development initiatives, they have learned lessons and have contributed to shaping their own body of teaching, which develops and analyses the role of planning in the public domain to promote community development. This knowledge is synthesized in the model Planning as Working With People, which shows that development is not effective unless it is promoted in continuous collaboration with all the actors involved in the process.

Relevance: 100.00%

Abstract:

Although photovoltaic (PV) systems have become much more competitive, the diffusion of PV systems still remains low in comparison to conventional energy sources. What are the current barriers hindering the diffusion of PV systems? In order to address this, we conducted an extensive and systematic literature review based on the Web of Science database. Our state-of-the-art review shows that, despite the rapid development and maturity of the technology during the past few years, the adoption of PV systems still faces several barriers. The wide adoption of PV systems, either as a substitute for other electricity power generation systems in urban areas or for rural electrification, is a challenging process. Our results show that the barriers are evident for both low- and high-income economies, encompassing four dimensions: sociotechnical, management, economic, and policy. Although the barriers vary across contexts, the lessons learned from one study can be valuable to others. The involvement of all stakeholders (adopters, local communities, firms, international organizations, financial institutions, and government) is crucial to foster adoption.

Relevance: 100.00%

Abstract:

The sexuality of Leah and Rachel, the womb, the mandrakes and Jacob's body are factors that define the foundation of our text as spaces of dialogue, mediation and structuring of the scene. The main focus is on chapter 30.14-16, which portrays the memory of the mandrakes. As mystical plants they dominate the religious field, and as medicinal plants they are used to solve biological problems. The institutions and societies that hold an ideology and laws regulating existence present, in the narrative, two sisters who are also wives of the same man and who, manipulated by an institution that diminishes and oppresses women, especially the barren woman, are confined as mere objects of sexuality and maintainers of the lineage through motherhood. The memory of the mandrakes is a sign that the existing practice surrounded a religion that was not monotheistic. It existed sociologically through syncretism, force, and socio-cultural and religious powers. It was made up of the memories of women who manipulated and mastered sacred power to meet their needs. The discourse of these women, in our unit, proves that the discourse of this narrative is not located only on the individual plane but also extends to the community level, the space that defines them and grants them importance through marriage and the gifts of motherhood as continuity of the lineage. These are women who claimed a space in history with their struggles and victories, with acts of love and suffering, of beliefs and powers, in a religious experience dominated by the masculine that goes beyond our present knowledge. The struggles grounded in the faith and ideology of these women defined and accentuated their role as protagonists in the biblical narratives of Genesis that we study. The preservation of these narratives, and of the theological space of the time, defined spaces, lives, generations and tribes that determined the promised generations and closed a cycle: that of Yahweh's promise regarding the descendants from Abraham onwards. Myths and beliefs were extinguished to make room for a monotheistic faith, but the religious experience

Relevance: 100.00%

Abstract:

A cross-maze task that can be acquired through either place or response learning was used to examine the hypothesis that posttraining neurochemical manipulation of the hippocampus or caudate-putamen can bias an animal toward the use of a specific memory system. Male Long-Evans rats received four trials per day for 7 days, a probe trial on day 8, further training on days 9–15, and an additional probe trial on day 16. Training occurred in a cross-maze task in which rats started from a consistent start-box (south), and obtained food from a consistent goal-arm (west). On days 4–6 of training, rats received posttraining intrahippocampal (1 μg/0.5 μl) or intracaudate (2 μg/0.5 μl) injections of either glutamate or saline (0.5 μl). On days 8 and 16, a probe trial was given in which rats were placed in a novel start-box (north). Rats selecting the west goal-arm were designated “place” learners, and those selecting the east goal-arm were designated “response” learners. Saline-treated rats predominantly displayed place learning on day 8 and response learning on day 16, indicating a shift in control of learned behavior with extended training. Rats receiving intrahippocampal injections of glutamate predominantly displayed place learning on days 8 and 16, indicating that manipulation of the hippocampus produced a blockade of the shift to response learning. Rats receiving intracaudate injections of glutamate displayed response learning on days 8 and 16, indicating an accelerated shift to response learning. The findings suggest that posttraining intracerebral glutamate infusions can (i) modulate the distinct memory processes mediated by the hippocampus and caudate-putamen and (ii) bias the brain toward the use of a specific memory system to control learned behavior and thereby influence the timing of the switch from the use of cognitive memory to habit learning to guide behavior.

Relevance: 100.00%

Abstract:

Acknowledgements We are grateful to Elaine O’Mahony, Imogen Pearce, Richard Comont, Anthony McCluskey and other BBCT staff for the many hours of BeeWatch species identification, and to all the people who submitted sightings to BeeWatch, OPAL, BWARS and the various local recording schemes and societies. We thank the NBN for allowing us to download the bumblebee records without strings attached, and the Essex, Greater London, Cumbria and Sussex-based recording centres for providing records upon request. Finally, we are indebted to Tom August and two anonymous reviewers for their valuable critique of an earlier version of this work.

Relevance: 100.00%

Abstract:

In this review article, we explore several recent advances in the quantitative modeling of financial markets. We begin with the Efficient Markets Hypothesis and describe how this controversial idea has stimulated a number of new directions of research, some focusing on more elaborate mathematical models that are capable of rationalizing the empirical facts, others taking a completely different tack in rejecting rationality altogether. One of the most promising directions is to view financial markets from a biological perspective and, specifically, within an evolutionary framework in which markets, instruments, institutions, and investors interact and evolve dynamically according to the “law” of economic selection. Under this view, financial agents compete and adapt, but they do not necessarily do so in an optimal fashion. Evolutionary and ecological models of financial markets constitute a truly new frontier whose exploration has just begun.

Relevance: 100.00%

Abstract:

Extrastriate visual cortex of the ventral-posterior suprasylvian gyrus (vPS cortex) of freely behaving cats was reversibly deactivated with cooling to determine its role in performance on a battery of simple or masked two-dimensional pattern discriminations and three-dimensional object discriminations. Deactivation of vPS cortex by cooling profoundly impaired the ability of the cats to recall the difference between all previously learned pattern and object discriminations. However, the cats' ability to learn or relearn pattern and object discriminations while vPS was deactivated depended upon the nature of the pattern or object and the cats' prior level of exposure to them. During cooling of vPS cortex, the cats could neither learn the novel object discriminations nor relearn a highly familiar masked or partially occluded pattern discrimination, although they could relearn both the highly familiar object and simple pattern discriminations. These cooling-induced deficits resemble those induced by cooling of the topologically equivalent inferotemporal cortex of monkeys and provide evidence that the equivalent regions contribute to visual processing in similar ways.

Relevance: 100.00%

Abstract:

Online education is no longer a trend; rather, it is mainstream. In the Fall of 2012, 69.1% of chief academic leaders indicated online learning was critical to their long-term strategy and, of the 20.6 million students enrolled in higher education, 6.7 million were enrolled in an online course (Allen & Seaman, 2013; United States Department of Education, 2013). The advent of online education and its rapid growth have forced academic institutions and faculty to question the current styles and techniques for teaching and learning. As developments in educational technology continue to advance, the ways in which we deliver and receive knowledge in both the traditional and online classrooms will further evolve. It is necessary to investigate and understand the progression and advancements in educational technology, and the variety of methods used to deliver knowledge, in order to improve the quality of education we provide today and to motivate, inspire, and educate the students of the 21st century. This paper explores the evolution of distance education, beginning with correspondence and the use of parcel post, moving to radio, then to television, and finally to online education.

Relevance: 100.00%

Abstract:

[Introduction.] Over the last two years, not only inside but also outside the framework of the EU treaties, far-reaching measures have been taken at the highest political level in order to address the financial and economic crisis in Europe, and in particular the sovereign debt crisis in the Euro area. This has triggered debates forecasting the “renationalisation of European politics.” Herman Van Rompuy, the President of the European Council, countered the prediction that Europe is doomed because of such a renationalisation: “If national politics have a prominent place in our Union, why would this not strengthen it?” He took the view that what was at stake was not a renationalisation of European politics but a Europeanisation of national politics, emphasising that post-war Europe was never developed in contradiction with nation states.1 Indeed, the European project is based on a mobilisation of bundled national forces which are of vital importance to a democratically structured and robust Union that is capable of acting in a globalised world. To that end, the Treaty of Lisbon created a legal basis. The new legal framework redefines the balance between the Union institutions and confirms the central role of the Community method in the EU legislative and judicial process. This contribution critically discusses the development of the EU's institutional balance after the entry into force of the Treaty of Lisbon, with particular emphasis on the use of the Community method and the current interplay between national constitutional courts and the Court of Justice. This interplay has to date been characterised by suspicion and mistrust, rather than by a genuine dialogue between the pertinent judicial actors.

Relevance: 100.00%

Abstract:

Globalisation has led to new health challenges for the 21st century. These challenges have transnational implications and involve a large range of actors and stakeholders. National governments no longer hold sole responsibility for the health of their people. These changes in health trends have led to the rise of Global Health Governance as a theoretical notion for health policy-making. The Southeast Asian region is particularly prone to public health threats, and it is for this reason that this brief looks at the potential of the Association of Southeast Asian Nations (ASEAN), as a regional organisation, to take a lead in health cooperation. Through a comparative study of the regional mechanisms for health cooperation of the European Union (EU) and ASEAN, we look at how ASEAN could maximise its potential as a global health actor. Regional institutions and a network of civil society organisations are crucial in relaying global initiatives for health and ensuring their effective implementation at the national level. While the EU benefits from higher degrees of integration and involvement in the sector of health policy-making, ASEAN’s role as a regional body for health governance will depend on both greater horizontal and vertical regional integration, through enhanced regional mechanisms, and a wider matrix of cooperation.

Relevance: 100.00%

Abstract:

To become a prosperous country devoid of institutional preconditions for corruption, Croatia will have to define its own goals, persevere in reaching them and introduce some sort of internal monitoring. True political will, democratisation, government accountability and appropriate policies are crucial, particularly for the institutions and mechanisms that monitor government accountability and citizen participation. One can only reiterate the European Commission’s hope that membership will prove to be an additional incentive to Croatia’s politicians to change their behaviour and start addressing state capture in the country.