900 results for Web 2.0 Applications in Education
Abstract:
Linked Data assets (RDF triples, graphs, datasets, mappings...) can be protected by intellectual property law or database law, or their access or publication may be restricted for other legal reasons (personal data protection, security, etc.). Publishing a rights expression along with the digital asset allows the rightsholder to waive some or all of the IP and database rights (leaving the work in the public domain), to permit some operations if certain conditions are satisfied (such as giving attribution to the author), or simply to remind the audience that some rights are reserved.
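As an illustration of the idea, here is a minimal sketch (not taken from the abstract) of attaching a rights expression to a Linked Data asset in Python with rdflib; the dataset IRI and the chosen license are hypothetical examples.

# Minimal sketch: publishing a rights expression with a Linked Data asset.
# The dataset IRI and the license choice are hypothetical.
from rdflib import Graph, URIRef, Namespace
from rdflib.namespace import DCTERMS, RDF

VOID = Namespace("http://rdfs.org/ns/void#")

g = Graph()
dataset = URIRef("http://example.org/dataset/triples")  # hypothetical asset

g.add((dataset, RDF.type, VOID.Dataset))
# dct:license points consumers to the terms under which reuse is allowed.
g.add((dataset, DCTERMS.license,
       URIRef("https://creativecommons.org/licenses/by/4.0/")))

print(g.serialize(format="turtle"))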
Abstract:
Recent commentaries have highlighted the advantages of open exchange of data and informatics resources for improving health-related policies and patient care in Africa. Yet, in many African regions, both private medical and public health information systems are still unaffordable. Open exchange over the social Web 2.0 could encourage more altruistic support of medical initiatives. We have carried out experiments to demonstrate the feasibility of using this approach to disseminate open data and informatics resources in Africa. Following these experiments we developed the AFRICA BUILD Portal, the first social network for African biomedical researchers. Through the AFRICA BUILD Portal, users can transparently access several resources. Currently, over 600 researchers are using distributed and open resources through this platform, which is designed to work over low-bandwidth connections.
Abstract:
The conception of the IoT (Internet of Things) is accepted, in both academia and industry, as the future direction of the Internet. It will enable people and things to be connected at any time and any place, with anything and anyone. The IoT has been proposed for application in many areas such as healthcare, transportation, logistics, and smart environments. This thesis, however, focuses on home healthcare, a promising model for addressing problems the traditional healthcare model cannot, such as limited medical resources and the growing demand for care from elderly and chronically ill patients. A remarkable feature of the IoT in its semantic-oriented vision is that vast numbers of sensors and devices are involved, generating enormous amounts of data; methods for managing these data (acquiring, interpreting, processing, and storing them) must therefore be implemented. Beyond this, other capabilities the IoT currently lacks are identified, namely interoperation, context awareness, and security and privacy. Context awareness is an emerging technology for managing and exploiting context so that any type of system can provide personalized services. The aim of this thesis is to explore ways to facilitate context awareness in the IoT. To this end, preliminary research is carried out. The most basic premise of context awareness is the ability to collect, model, understand, reason over, and make use of context. A complete literature review of existing context-modelling and context-reasoning techniques is conducted, concluding that ontology-based context modelling and ontology-based context reasoning are the most promising and efficient techniques for managing context. To integrate ontologies into the IoT, a specific ontology-based context-awareness framework for IoT applications is proposed. The framework comprises eight components: hardware, UI (User Interface), context modelling, context fusion, context reasoning, context repository, security unit, and context dissemination. Moreover, building on TOVE (Toronto Virtual Enterprise), a formal ontology-development methodology is proposed and illustrated, consisting of four stages: specification and conceptualization, competency formulation, implementation, and validation and documentation. In addition, a home-healthcare scenario is elaborated through a list of well-defined functionalities. To represent this scenario, the proposed methodology is applied and the ontology-based model is developed in Protégé, a free and open-source ontology editor. Finally, the accuracy and completeness of the proposed ontology are validated, showing that it can accurately represent the scenario of interest.
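As a rough illustration of the modelling step described above, the following sketch builds a tiny home-healthcare ontology in Python with owlready2; all class and property names are invented, and the thesis's actual Protégé ontology is not reproduced.

# Minimal sketch of an ontology-based context model for home healthcare.
# Class and property names are hypothetical.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/home-healthcare.owl")  # hypothetical IRI

with onto:
    class Patient(Thing): pass
    class Sensor(Thing): pass
    class ContextReading(Thing): pass

    class monitors(ObjectProperty):    # Sensor -> Patient
        domain = [Sensor]
        range = [Patient]

    class produces(ObjectProperty):    # Sensor -> ContextReading
        domain = [Sensor]
        range = [ContextReading]

# A tiny scenario instance: one wearable sensor monitoring one patient.
alice = Patient("alice")
hr_sensor = Sensor("heart_rate_sensor")
hr_sensor.monitors = [alice]

onto.save(file="home_healthcare.owl")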
Abstract:
The web today hosts products developed both by professional developers and by end-users with more limited knowledge. Although the quality of the two can be expected to differ, both kinds of solution are accepted and used in web applications. In Web 2.0, this behaviour is seen in the development of web components. The goal of this work is to develop a persistence model that, supported by a server side and a client side, collects quality metrics on components as users interact with them. From these metrics it becomes possible to improve the quality of the components. The metrics are collected through PicBit, an application we developed so that users can interconnect different components without restrictions and, after interacting with them, express their degree of satisfaction, which is recorded for quality evaluation. A set of intrinsic metrics is also defined for each component: metrics not determined by the user, which serve as a reference point for the evaluation. Once both the intrinsic and the user-derived metrics are available, a correlation between them is computed, making it possible to analyse deviations between the two and to determine the component's inherent quality. The conclusion of the work is that when users can run usability tests freely, without restrictions, the chance of obtaining favourable results increases, because those results show how an end-user will actually use the application. This way of working is helped by the number of tools available today for monitoring user flow through a service.
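The correlation step mentioned above might look like the following sketch; the metric names and values are invented, since the abstract does not give the formulas.

# Minimal sketch: correlating intrinsic component metrics with
# user-reported satisfaction. All values are hypothetical.
from statistics import correlation  # Pearson's r, Python 3.10+

intrinsic_score   = [0.82, 0.64, 0.91, 0.47, 0.73]  # e.g. static quality metric
user_satisfaction = [4.5, 3.2, 4.8, 2.9, 3.9]       # e.g. 1-5 rating per component

r = correlation(intrinsic_score, user_satisfaction)
print(f"Pearson r = {r:.2f}")  # a low r would flag a deviation worth analysing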
Abstract:
The web is in constant change, driven by ever greater user interaction. The current wave of paradigms and technologies associated with Web 2.0 has produced a series of highly useful standards that meet the needs of today's web development. Among them are web components: user-defined HTML tags that fulfil a specific function within a page. There is a need to measure the quality of such developments, in order to judge whether the web-component concept represents a revolutionary change in Web 2.0 development. This requires exploiting web components, understood as metric-based quality measurement together with the definition of a component-interconnection model. The PicBit platform arises in response to these questions. It is a social profile-building platform based on these elements: from the end-user's perspective it is a tool for creating profiles and social communities, while from an academic perspective it is a testing environment, or sandbox, for web components. To this end, the server side of the platform must be implemented, focused on this exploitation work, by defining a REST interface of operations and a system for collecting user events on the platform. Thanks to this platform it will be possible to identify which parameters positively influence the experience of using a component, and to discover the future potential of this type of development.
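A server-side event-collection endpoint of the kind the abstract describes could look like this sketch; Flask is used here for brevity, and the route, fields, and in-memory store are assumptions rather than PicBit's published interface.

# Minimal sketch of a REST endpoint for collecting user-interaction events.
from flask import Flask, request, jsonify

app = Flask(__name__)
events = []  # stand-in for a real persistence layer

@app.post("/api/events")
def collect_event():
    payload = request.get_json(force=True)
    # Record which component the user interacted with and how.
    events.append({
        "component_id": payload.get("component_id"),
        "action": payload.get("action"),        # e.g. "click", "drag"
        "timestamp": payload.get("timestamp"),
    })
    return jsonify(status="stored", count=len(events)), 201

if __name__ == "__main__":
    app.run(debug=True)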
Abstract:
This thesis establishes the theoretical foundations and designs an open collection of C++ classes, called VBF (Vector Boolean Functions), for analysing vector Boolean functions (functions that map a Boolean vector to another Boolean vector) from a cryptographic perspective. The implementation builds on Victor Shoup's NTL library, adding new modules that complement NTL's functions and make them suitable for cryptographic analysis. The fundamental class representing a vector Boolean function can be initialized very flexibly from several alternative data structures, such as the truth table, the trace representation, and the algebraic normal form (ANF), among others. VBF can thus evaluate the most relevant cryptographic criteria for block and stream ciphers as well as for hash functions: for instance, it provides the nonlinearity, the linearity distance, the algebraic degree, the linear structures, and the frequency distribution of the absolute values of the Walsh spectrum or the autocorrelation spectrum, among other criteria. In addition, VBF can perform operations on vector Boolean functions such as equality testing, composition, inversion, sum, direct sum, bricklayering (the parallel application of vector Boolean functions employed, for example, in the Rijndael cipher), and the addition of coordinate functions. The thesis also illustrates the use of VBF in two practical applications. On the one hand, the most relevant properties of existing block ciphers are analysed. On the other hand, by combining VBF with optimization algorithms, Boolean functions have been designed whose cryptographic properties are the best known to date.
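To make two of the listed criteria concrete, here is a small Python sketch (standing in for the C++ VBF classes) that computes the Walsh spectrum and the nonlinearity of a single-output Boolean function from its truth table; the example function is chosen only for illustration.

# Sketch of two criteria VBF computes, for a single-output Boolean function.
def walsh_spectrum(truth_table):
    """W_f(a) = sum over x of (-1)^(f(x) XOR a.x), with a.x = parity of a AND x."""
    n = len(truth_table).bit_length() - 1
    return [sum((-1) ** (truth_table[x] ^ (bin(a & x).count("1") % 2))
                for x in range(2 ** n))
            for a in range(2 ** n)]

def nonlinearity(truth_table):
    """Hamming distance to the nearest affine function: 2^(n-1) - max|W_f|/2."""
    n = len(truth_table).bit_length() - 1
    return 2 ** (n - 1) - max(abs(w) for w in walsh_spectrum(truth_table)) // 2

# Example: f(x1, x2, x3) = x1*x2 XOR x3, a 3-variable quadratic function.
tt = [((x >> 2) & (x >> 1) & 1) ^ (x & 1) for x in range(8)]
print(walsh_spectrum(tt))  # spectrum values in {0, +/-4}
print(nonlinearity(tt))    # 2, the maximum for 3 variables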
Abstract:
The thesis examines one of the most important aspects of managing the information society: understanding how a person values any given situation. This matters both for the individual performing the assessment and for the environment with which he or she interacts. Valuation is the result of comparison: identical values are assigned to similar alternatives, and higher values to alternatives considered more favourably in the comparison process. The patterns that guide the individual in making the comparison derive from his or her individual preferences (that is, opinions). The thesis presents several procedures for establishing a person's preference relations between alternatives, progressing to a numerical representation of those preferences. When the representation of preferences is homogeneous, it also allows personal preferences to be contrasted with those of other evaluators, which supports policy evaluation, the transfer of information between individuals, and the design of the alternative that best fits the identified preferences. At the same time, this information can be used to build communities of people who share the same system of preferences on a particular issue. The thesis presents a case study applying this methodology: optimizing labour policies in a real market. To support jobseekers (entering or re-entering the labour market, or changing activity), it is necessary to know their preferences regarding the occupations they are willing to perform. Moreover, for labour mediation to be effective, the occupations sought must be offered by the labour market, and the applicant must meet the conditions for access to them. The models are then extended to the procedures used to transform multiple preferences into an aggregate decision, considering both the opinion of each individual taking part in the decision and their social interactions, all aimed at producing a solution that fits the point of view of the whole population as closely as possible. Decision making with multiple participants bears mainly on: broadening the scope to include people traditionally left out of decision making; aggregating the preferences of the multiple participants in collective decisions (by voting, through applications developed for Web 2.0, and through interpersonal comparisons of utility); and, finally, self-organization, allowing participants in the valuation to interact with one another so that the final result is better than the mere aggregation of individual opinions.
The thesis analyses the e-democracy systems, and the tools for implementing them, that are currently most widely used or most advanced. They are closely related to Web 2.0, and their adoption is driving an evolution of present-day democracy. Collaborative decision-making (CDM) software applications, which help give sense and meaning to data, are also studied; they aim to coordinate the functions and features needed to reach timely collective decisions, allowing all stakeholders to participate in the process.
The thesis concludes by presenting a new model, or paradigm, for decision making with multiple participants. The development rests on the calculation of empathic utility functions. It seeks collaboration between individuals to make decision making more effective, and it also aims to increase the number of people involved. It studies the interactions and feedback among citizens, since the influence of citizens on one another is fundamental to collective decision-making and e-democracy processes, and it includes methods for detecting when the process has stalled and should be interrupted. The model is applied to a consultation of the citizens of a municipality on whether to introduce bike lanes and what characteristics they should have; the voting and the interaction among voters are simulated.
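As a concrete (and much simplified) illustration of the aggregation step, the following sketch turns individual preference rankings into one collective ranking with a Borda count; the thesis's empathic-utility model is richer and is not reproduced here, and the bike-lane alternatives are invented.

# Minimal sketch: aggregating individual rankings into a collective one.
from collections import defaultdict

def borda(rankings):
    """Each ranking lists alternatives from most to least preferred."""
    scores = defaultdict(int)
    for ranking in rankings:
        for position, alternative in enumerate(ranking):
            scores[alternative] += len(ranking) - 1 - position
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical citizen votes on bike-lane designs.
votes = [
    ["protected", "painted", "shared"],
    ["painted", "protected", "shared"],
    ["protected", "shared", "painted"],
]
print(borda(votes))  # -> ['protected', 'painted', 'shared']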
Abstract:
Surveys on the social perception of science indicate that a high percentage of young people have no interest in these subjects. Scientific literacy in the population is very important, since it enables people to make decisions both in everyday life and about problems that affect society as a whole. A society with more scientific and technological knowledge enjoys higher economic growth, but unfortunately the number of students choosing science and technology studies is declining. This lack of interest is influenced by factors both internal and external to the student. As teachers, we can act on these factors by promoting classroom activities related to scientific and technological outreach, using media that offer information from reliable sources (print press, radio, television, and online media/Web 2.0). The methodological proposal presented here will help broaden scientific and technological literacy by exploring the work of current researchers and the biographies of scientists of earlier periods. Students will also collaboratively create their own material for communicating science and technology in the classroom. The proposal targets the first year of compulsory secondary education (ESO), with the prospect of extending it, with adapted content, to the remaining years of secondary education and the baccalaureate. With this work we aim to help increase interest in science and technology among secondary-school and baccalaureate students.
Abstract:
We live in a period of political, economic, social, and cultural transformation that constantly challenges us. In this context, over recent decades the use of technology has expanded into many everyday activities, into the dissemination of information, and into communication, as a form of expression and of social organization. The school, as a social institution, needs to recognize this new reality, this different possibility for acquiring and transforming knowledge, so that it can intervene, reframe, and redirect its action in order to meet the demands of its time. The general objective of this research, based on the presentation and analysis of experiences with information and knowledge technologies, is to reflect on how to bring these tools into the teaching and learning process at school, from the perspective of teachers and students, with a view to the integral education of the learner. In developing this work, we found it necessary to know and consider the historical context, as well as the perspectives related to the school and its protagonists (teachers and students) in the so-called Information and Knowledge Society. We stress the importance of teachers (and their training) and their role as mediators in learning processes, as well as their reception of technology, observing its function and its space of action. We highlight experiences with the use of digital information and communication technologies (TDIC) carried out by teachers and students, such as producing a game, scientific magazines, story writing, artistic productions, blogs, vlogs, and group discussions on social networks. The methodology of this research is qualitative, in the action-research and narrative modalities, given the involvement with the group and with the activities developed, in which the participants share with the researcher their personal and learning histories related to the actions or activities they carry out, providing relevant information and evidence about their formative process over time. The literature review was conducted through bibliographic and documentary analysis of books, theses, dissertations, and specialized journals on the subject, as well as articles published on the Internet. Data were collected through informal conversations, semi-structured interviews, and filmed accounts. The analysis followed a hermeneutic-phenomenological approach, which seeks to describe and interpret phenomena of human experience in order to investigate their essence through the identification of themes. The results point to the need for, and the possibility of, expanding the use of TDIC as a resource in the teaching and learning process, through training, dialogue, interaction, intentionality, expectations, hope, and their consequences.
Abstract:
The 2.0-Å resolution x-ray crystal structure of a novel trimeric antibody fragment, a “triabody,” has been determined. The trimer is made up of polypeptides constructed in a manner identical to that previously described for some “diabodies”: a VL domain directly fused to the C terminus of a VH domain—i.e., without any linker sequence. The trimer has three Fv heads with the polypeptides arranged in a cyclic, head-to-tail fashion. For the particular structure reported here, the polypeptide was constructed with a VH domain from one antibody fused to the VL domain from an unrelated antibody giving rise to “combinatorial” Fvs upon formation of the trimer. The structure shows that the exchange of the VL domain from antibody B1-8, a Vλ domain, with the VL domain from antibody NQ11, a Vκ domain, leads to a dramatic conformational change in the VH CDR3 loop of antibody B1-8. The magnitude of this change is similar to the largest of the conformational changes observed in antibody fragments in response to antigen binding. Combinatorial pairing of VH and VL domains constitutes a major component of antibody diversity. Conformationally flexible antigen-binding sites capable of adapting to the specific CDR3 loop context created upon VH–VL pairing may be employed by the immune system to maximize the structural diversity of the immune response.
Abstract:
We tested whether severe congestive heart failure (CHF), a condition associated with excess free-water retention, is accompanied by altered regulation of the vasopressin-regulated water channel, aquaporin-2 (AQP2), in the renal collecting duct. CHF was induced by left coronary artery ligation. Compared with sham-operated animals, rats with CHF had severe heart failure with elevated left ventricular end-diastolic pressures (LVEDP): 26.9 ± 3.4 vs. 4.1 ± 0.3 mmHg, and reduced plasma sodium concentrations (142.2 ± 1.6 vs. 149.1 ± 1.1 mEq/liter). Quantitative immunoblotting of total kidney membrane fractions revealed a significant increase in AQP2 expression in animals with CHF (267 ± 53%, n = 12) relative to sham-operated controls (100 ± 13%, n = 14). In contrast, immunoblotting demonstrated a lack of an increase in expression of AQP1 and AQP3 water channel expression, indicating that the effect on AQP2 was selective. Furthermore, postinfarction animals without LVEDP elevation or plasma Na reduction showed no increase in AQP2 expression (121 ± 28% of sham levels, n = 6). Immunocytochemistry and immunoelectron microscopy demonstrated very abundant labeling of the apical plasma membrane and relatively little labeling of intracellular vesicles in collecting duct cells from rats with severe CHF, consistent with enhanced trafficking of AQP2 to the apical plasma membrane. The selective increase in AQP2 expression and enhanced plasma membrane targeting provide an explanation for the development of water retention and hyponatremia in severe CHF.
Abstract:
The levels in 129Sn populated by the beta-minus decay of 129In isomers were investigated at the ISOLDE facility at CERN using the newly commissioned ISOLDE Decay Station (IDS). The lowest 1/2+ state and the 3/2+ ground state in 129Sn are expected to have configurations dominated by the neutron s1/2 (l = 0) and d3/2 (l = 2) single-particle states, respectively. Consequently, these states should be connected by a somewhat slow l-forbidden M1 transition. Using fast-timing spectroscopy we have measured the half-life of the 1/2+ 315.3-keV state, T1/2 = 19(10) ps, which corresponds to a moderately fast M1 transition. Shell-model calculations using the CD-Bonn effective interaction, with standard effective charges and g factors, predict a 4-ns half-life for this level. We can reconcile the shell-model calculations with the measured T1/2 value by renormalizing the M1 effective operator for neutron holes.
Abstract:
The general objective of this project is the study, development, and experimental evaluation of different techniques and systems based on Human Language Technologies (Tecnologías del Lenguaje Humano, TLH) for the next generation of intelligent digital-information processing systems (modelling, retrieval, treatment, understanding, and discovery), addressing the current challenges of digital communication. In this new scenario, systems must incorporate reasoning capabilities able to uncover the subjectivity of information in all its contexts (spatial, temporal, and emotional), analysing the different dimensions of use (multilinguality, multimodality, and register).
Abstract:
The exponential growth of subjective information in the framework of Web 2.0 has created the need for Natural Language Processing tools able to analyse and process such data for multiple practical applications. These tools require training on specifically annotated corpora whose level of detail is fine enough to capture the phenomena involved. This paper presents EmotiBlog, a fine-grained annotation scheme for subjectivity. We show how it is built and demonstrate the benefits it brings to the systems that use it for training, through experiments we carried out on opinion mining and emotion detection. We employ corpora of different textual genres: a set of annotated reported speech extracted from news articles, the news titles annotated with polarity and emotion from SemEval 2007 (Task 14), and ISEAR, a corpus of real-life self-expressed emotion. We also show how the model built from the EmotiBlog annotations can be enhanced with external resources. The results demonstrate that EmotiBlog, through its structure and annotation paradigm, offers high-quality training data for systems dealing with both opinion mining and emotion detection.
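A minimal sketch of the kind of supervised system such annotated corpora can train is shown below; the four sentences and labels are invented, and the pipeline (TF-IDF plus logistic regression via scikit-learn) is only a stand-in for the models evaluated in the paper.

# Minimal sketch: polarity classifier trained on EmotiBlog-style labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "This policy is a wonderful step forward.",
    "The decision was an absolute disaster.",
    "I am thrilled with the new results.",
    "The report left everyone deeply disappointed.",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["A disappointing outcome overall."]))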
Abstract:
Because of the changes introduced by the European Higher Education Area, which increases the weight of non-classroom work hours, new mechanisms are needed to enable better communication and cooperation in the learning process. Social networks such as Facebook can supply these mechanisms, but their successful use in teaching may be strongly affected by students' learning styles. This article argues for studying the influence of different learning styles on non-classroom teaching through social networks, with the aim of improving student performance. It describes the project "Las redes sociales y su relación con los estilos de aprendizaje" ("Social networks and their relation to learning styles"), to be carried out within the Redes de Investigación en Docencia Universitaria programme of the Instituto de Ciencias de la Educación at the Universidad de Alicante.