793 results for Electronic newspapers Singapore
Abstract:
This work addresses the problem of confirming or refuting whether certain online news sources exhibit some kind of bias that a reader could, a priori, detect by simple intuition. To simplify the analysis, it works with news sources that offer APIs for accessing their articles and that also provide semantic annotations (tags) attached to each news item as a form of classification. To meet the stated objectives, a method described in the literature is analyzed and improved; it analyzes the tags in order to derive and apply a vocabulary common to the different sources (a normalization procedure). The resulting software is presented as an application implemented in Java and MySQL that collects semantically annotated news items from different online news sources (the newspapers The Guardian and The New York Times), analyzes them, and visualizes the results in terms of the normalized vocabulary so as to draw conclusions about which topics each source covers most. Finally, the results are analyzed and discussed, a series of conclusions is drawn about the normalization and classification method employed, and possible improvements to the application are proposed.
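To make the normalization procedure concrete, here is a minimal Java sketch of the tag-normalization step described above, assuming the tags have already been retrieved through each source's API; the tag names and the mapping table are invented for illustration, not the thesis's actual vocabulary.

```java
import java.util.*;

// Minimal sketch of the normalization step: source-specific tags are mapped
// onto a shared vocabulary so that counts from different newspapers become
// comparable. Tag names and mappings are illustrative, not the thesis's
// actual vocabulary.
public class TagNormalizer {
    // Common vocabulary: source-specific tag (lower-cased) -> normalized term
    private final Map<String, String> vocabulary = new HashMap<>();

    public TagNormalizer() {
        vocabulary.put("us politics", "politics");
        vocabulary.put("politics and government", "politics");
        vocabulary.put("football", "sport");
        vocabulary.put("soccer", "sport");
    }

    /** Maps one raw tag to the common vocabulary, or null if unmapped. */
    public String normalize(String rawTag) {
        return vocabulary.get(rawTag.toLowerCase(Locale.ROOT));
    }

    /** Counts normalized topics over per-article tag lists from any source. */
    public Map<String, Integer> topicCounts(List<List<String>> articlesTags) {
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> tags : articlesTags) {
            for (String tag : tags) {
                String norm = normalize(tag);
                if (norm != null) counts.merge(norm, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        TagNormalizer n = new TagNormalizer();
        List<List<String>> fetched = List.of(
            List.of("US politics", "Football"),   // e.g. tags from one source
            List.of("Politics and Government"));  // e.g. tags from the other
        System.out.println(n.topicCounts(fetched)); // {politics=2, sport=1}
    }
}
```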
Abstract:
The monthly magazine of the Special Libraries Association (SLA), Information Outlook, usually carries a regular column devoted to copyright issues. In the September 2001 issue the article was titled "Tasini: final chapter", and it explained the resolution of the lawsuit that pitted freelance writers (those who work on their own account) against their former employers, such as The New York Times (NYT). However, as in a good murder mystery in which the dead man reappears, the title of the October column was "The case that won't die!".
Abstract:
We conducted this interview in the days before the launch of La información.com on 23 April 2009. In the current context of crisis in the media, in which we are witnessing major staff cuts and even the closure of titles, the happy news of the birth of a new digital newspaper is all the more striking. The context hardly seems the most favorable. How do you see the current situation of the media as a whole, and what space does La información.com aspire to occupy? [...] "[The media] on the internet are a drop in the ocean of information." This is a decidedly humble view that seems far removed from the classic idea of the media as the "fourth estate". In your view, what is the role of the media in this ocean of information that is the internet?
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service, and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is content discovery according to the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow behaviours to be specified for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, as well as the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can support complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows agents to be configured to perform automated tasks; a scraping ontology, defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows service descriptions to be shared. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
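As an illustration of the content level, the following minimal Java sketch applies selector-based discovery rules to an HTML representation to obtain semantically labelled contents. The rule format and property names are invented for illustration (the thesis defines its mappings through the scraping ontology); HTML parsing uses the open-source jsoup library.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.util.*;

// Content-level discovery sketch: each rule pairs a CSS selector with the
// semantic property it should populate. Selector and property names are
// illustrative, not the thesis's actual scraping ontology.
public class ContentDiscovery {
    record DiscoveryRule(String cssSelector, String semanticProperty) {}

    static Map<String, List<String>> apply(String html, List<DiscoveryRule> rules) {
        Document doc = Jsoup.parse(html);
        Map<String, List<String>> entities = new HashMap<>();
        for (DiscoveryRule rule : rules) {
            doc.select(rule.cssSelector()).forEach(el ->
                entities.computeIfAbsent(rule.semanticProperty(), k -> new ArrayList<>())
                        .add(el.text()));
        }
        return entities;
    }

    public static void main(String[] args) {
        String html = "<article><h1 class='headline'>Sample story</h1>"
                    + "<span class='byline'>A. Writer</span></article>";
        List<DiscoveryRule> rules = List.of(
            new DiscoveryRule("h1.headline", "news:headline"),
            new DiscoveryRule("span.byline", "news:author"));
        System.out.println(apply(html, rules));
        // {news:headline=[Sample story], news:author=[A. Writer]}
    }
}
```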
Abstract:
One of the challenges facing the current web is the efficient use of all the available information. The Web 2.0 phenomenon has favored the creation of contents by average users, and thus the amount of information that can be found on diverse topics has grown exponentially in recent years. Initiatives such as linked data are helping to build the Semantic Web, in which a set of standards is proposed for the exchange of data among heterogeneous systems. However, these standards are sometimes not used, and there are still plenty of websites that require naive techniques to discover their contents and services. This paper proposes an integrated framework for content and service discovery and extraction. The framework is divided into several layers where the discovery of contents and services is performed in a representational state transfer (REST) system such as the web. It employs several web mining techniques as well as feature-oriented modeling for the discovery of cross-cutting features in web resources. The framework is applied in a scenario of electronic newspapers: an intelligent agent crawls the web for related news, and uses services and visits links automatically according to its goal. This scenario illustrates how discovery is performed at the different levels and how the use of semantics helps implement an agent that performs high-level tasks.
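To make the agent-level behaviour concrete, below is a minimal Java sketch of a goal-driven crawler of the kind the scenario describes. The in-memory "web", the link structure, and the keyword goal are all invented stand-ins for real HTTP fetches and semantic goal descriptions.

```java
import java.util.*;

// Sketch of the agent level: a crawler works toward a high-level goal
// (collect news about a topic) by deciding, per resource, whether to extract
// content and which links to follow. The in-memory "web" stands in for real
// HTTP fetches; all page names and contents are invented.
public class NewsCrawlerAgent {
    public static void main(String[] args) {
        Map<String, List<String>> links = Map.of(
            "frontpage", List.of("story1", "story2"),
            "story1", List.of(), "story2", List.of("story3"), "story3", List.of());
        Map<String, String> content = Map.of(
            "frontpage", "index", "story1", "budget vote in parliament",
            "story2", "football results", "story3", "parliament debate continues");

        Deque<String> frontier = new ArrayDeque<>(List.of("frontpage"));
        Set<String> visited = new HashSet<>();
        List<String> matches = new ArrayList<>();
        String goalKeyword = "parliament"; // the agent's high-level goal

        while (!frontier.isEmpty()) {
            String url = frontier.pop();
            if (!visited.add(url)) continue;           // skip already-seen resources
            if (content.get(url).contains(goalKeyword)) matches.add(url);
            frontier.addAll(links.get(url));           // follow discovered links
        }
        System.out.println("Relevant resources: " + matches); // [story1, story3]
    }
}
```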
Abstract:
The main objective of the research is the analysis of gender stereotypes as a new news value in the digital newspapers The Times, El País, Le Monde, Diario de Noticias and Corriere della Sera. The methodology used was a content analysis of 1,688 news items published between 1 May 2013 and 1 May 2014. The results indicate that traditional stereotypes of women persist, especially in the case of El País, the Corriere della Sera and the Jornal de Noticias. At the same time, however, what we call "counter-stereotypes" emerge as a new news value, characterized by presenting women with positive values opposed to the traditional stereotypes, especially in The Times and Le Monde.
Abstract:
We investigate the low-energy electronic transport across grain boundaries in graphene ribbons and infinite flakes. Using the recursive Green’s function method, we calculate the electronic transmission across different types of grain boundaries in graphene ribbons. We show results for the charge density distribution and the current flow along the ribbon. We study linear defects at various angles with the ribbon direction, as well as overlaps of two monolayer ribbon domains forming a bilayer region. For a class of extended defect lines with periodicity 3, an analytic approach is developed to study transport in infinite flakes. This class of extended grain boundaries is particularly interesting, since the K and K′ Dirac points are superposed.
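The ribbon calculations summarized above require matrix-valued recursive Green's functions; as a minimal scalar instance of the same Landauer/Green's-function machinery, the sketch below computes the transmission through a single defect site between two semi-infinite one-dimensional tight-binding leads. All parameter values are illustrative.

```java
// Minimal scalar instance of the Green's-function transport machinery: the
// transmission through one defect site coupled to two semi-infinite 1D
// tight-binding leads (hopping t, defect on-site energy eps0). The paper's
// ribbon geometry needs the matrix (recursive) version of these quantities.
public class LandauerToy {
    public static void main(String[] args) {
        double t = 1.0;      // lead hopping
        double eps0 = 0.5;   // defect on-site energy (illustrative)
        for (double E = -1.9; E <= 1.9; E += 0.475) {
            double rad = Math.sqrt(4 * t * t - E * E); // inside the band |E| < 2|t|
            // Retarded surface GF of a semi-infinite chain: g = (E - i*rad) / (2 t^2)
            // Lead self-energy Sigma = t^2 * g; broadening Gamma = -2 Im(Sigma) = rad
            double gamma = rad;
            // Device GF: G = 1 / (E - eps0 - 2*Sigma) = 1 / (-eps0 + i*rad)
            double absG2 = 1.0 / (eps0 * eps0 + rad * rad);
            double T = gamma * gamma * absG2; // Landauer: T = Gamma_L |G|^2 Gamma_R
            System.out.printf("E = %+.3f   T = %.3f%n", E, T);
        }
    }
}
```

With eps0 = 0 the defect disappears and T = 1 across the band, which is a quick sanity check on the formulas.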
Abstract:
CONSPECTUS: Two-dimensional (2D) crystals derived from transition metal dichalcogenides (TMDs) are intriguing materials that offer a unique platform to study fundamental physical phenomena as well as to explore development of novel devices. Semiconducting group 6 TMDs such as MoS2 and WSe2 are known for their large optical absorption coefficient and their potential for high efficiency photovoltaics and photodetectors. Monolayer sheets of these compounds are flexible, stretchable, and soft semiconductors with a direct band gap in contrast to their well-known bulk crystals that are rigid and hard indirect gap semiconductors. Recent intense research has been motivated by the distinct electrical, optical, and mechanical properties of these TMD crystals in the ultimate thickness regime. As a semiconductor with a band gap in the visible to near-IR frequencies, these 2D MX2 materials (M = Mo, W; X = S, Se) exhibit distinct excitonic absorption and emission features. In this Account, we discuss how optical spectroscopy of these materials allows investigation of their electronic properties and the relaxation dynamics of excitons. We first discuss the basic electronic structure of 2D TMDs, highlighting the key features of the dispersion relation. With the help of theoretical calculations, we further discuss how the photoluminescence energy of direct and indirect excitons provides a guide to understanding the evolution of the electronic structure as a function of the number of layers. We also highlight the behavior of the two competing conduction valleys and their role in the optical processes. Intercalation of group 6 TMDs by alkali metals results in a structural phase transformation with a corresponding semiconductor-to-metal transition. Monolayer TMDs obtained by intercalation-assisted exfoliation retain the metastable metallic phase. Mild annealing, however, destabilizes the metastable phase and gradually restores the original semiconducting phase. Interestingly, the semiconducting 2H phase, metallic 1T phase, and a charge-density-wave-like 1T' phase can coexist within a single crystalline monolayer sheet. We further discuss the electronic properties of the restacked films of chemically exfoliated MoS2. Finally, we focus on the strong optical absorption and related exciton relaxation in monolayer and bilayer MX2. Monolayer MX2 absorbs as much as 30% of incident photons in the blue region of the visible light despite being atomically thin. This giant absorption is attributed to nesting of the conduction and valence bands, which leads to a divergence of the optical conductivity. We describe how the relaxation pathway of excitons depends strongly on the excitation energy. Excitation at the band nesting region is of unique significance because it leads to relaxation of electrons and holes with opposite momentum and spontaneous formation of indirect excitons.
Abstract:
Manpower is a basic resource. It is the indispensable means of converting other resources to mankind's use and benefit. As a process of increasing the knowledge, skills, and dexterity of the people of a society, manpower development is the most fundamental means of enabling a nation to acquire the capacities to bring about its desired future state of affairs -- a mightier and wealthier nation. Singapore's brief nation-building history justifies the emphasis accorded to the importance of good quality human resources and manpower development in economic and socio-political developments. As a tiny island-state with a poor natural resource base, Singapore's long-term survival and development depend ultimately upon the quality and the creative energy of her people. In line with the nation-building goals and strategies of the Republic, as conditioned by her objective setting, Singapore's basic manpower development premise has been one of "quality and not quantity". While implementing the "stop-at-two" family planning and population control programs and the relevant immigration measures to guard against the prospect of a "population explosion", the Government has energetically fostered various educational programs, including vocational training schemes, adult education programs, the youth movement, and the national service scheme, to improve the quality of Singaporeans. There is no denying that some of the manpower development measures taken by the Government have imposed sacrifice and hardship on Singapore citizens. Nevertheless, they are the basic conditions for the island-Republic's long-term survival and development. It is essential to note that Singapore's continuing existence and phenomenal success are largely attributable to the will, capacities and efforts of her leaders and people. In the final analysis, the wealth and the strength of a nation are based upon its ability to conserve, develop and utilize effectively the innate capacities of its people. This is true not only of Singapore but necessarily of other developing nations. It can be safely presumed that since most developing states' concerns about the quality of their human resources and the progress of their nation-building work are inextricably bound to those about the quantity of their population, the "quality and not quantity" motto of Singapore's manpower development programs can also be their guiding principle.
Abstract:
Each player in the financial industry, each bank, stock exchange, government agency, or insurance company operates its own financial information system or systems. By its very nature, financial information, like the money that it represents, changes hands. Therefore the interoperation of financial information systems is the cornerstone of the financial services they support. E-services frameworks such as web services offer an unprecedented opportunity for the flexible interoperation of financial systems. Naturally, the critical economic role and the complexity of financial information led to the development of various standards. Yet standards alone are not a panacea: different groups of players use different standards or different interpretations of the same standard. We believe that the solution lies in the convergence of flexible E-services such as web services and semantically rich metadata as promised by the Semantic Web; then a mediation architecture can be used for the documentation, identification, and resolution of semantic conflicts arising from the interoperation of heterogeneous financial services. In this paper we illustrate the nature of the problem in the Electronic Bill Presentment and Payment (EBPP) industry and the viability of the solution we propose. We describe and analyze the integration of services using four different formats: the IFX, OFX and SWIFT standards, and an example proprietary format. To accomplish this integration we use the COntext INterchange (COIN) framework. The COIN architecture leverages a model of sources' and receivers' contexts in reference to a rich domain model or ontology for the description and resolution of semantic heterogeneity.
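As a toy illustration of context mediation in the spirit of the COIN approach, the Java sketch below reconciles one classic semantic conflict: a differing monetary scale factor between a source and a receiver context. The context names, scale factors, and currency rule are invented for illustration; they are not the actual IFX, OFX, or SWIFT semantics.

```java
import java.util.Map;

// Illustrative context mediation: each source/receiver context declares how it
// represents a shared ontology concept ("monetary amount"), and the mediator
// converts between them. All context definitions here are invented.
public class ContextMediator {
    record AmountContext(String currency, double scaleFactor) {}

    static final Map<String, AmountContext> CONTEXTS = Map.of(
        "sourceA", new AmountContext("USD", 0.01),   // amounts given in cents
        "receiverB", new AmountContext("USD", 1.0)); // amounts expected in dollars

    /** Converts a raw amount from the source context to the receiver context. */
    static double mediate(double rawAmount, String sourceCtx, String receiverCtx) {
        AmountContext src = CONTEXTS.get(sourceCtx);
        AmountContext dst = CONTEXTS.get(receiverCtx);
        if (!src.currency().equals(dst.currency()))
            throw new IllegalStateException("currency conversion rule needed");
        // Normalize to the ontology's canonical unit, then re-scale for the receiver.
        return rawAmount * src.scaleFactor() / dst.scaleFactor();
    }

    public static void main(String[] args) {
        // A bill of 12345 "units" in sourceA (cents) is 123.45 for receiverB (dollars).
        System.out.println(mediate(12345, "sourceA", "receiverB"));
    }
}
```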
Abstract:
With a wide diversity of available technologies, it is extremely difficult for SMEs to identify, plan, prioritize and apply the correct strategy. Electronic manufacturing has been evolving for some time, but an effective planning framework to assist managers with e-manufacturing planning is still lacking. A framework built around three elements, the Balanced Scorecard, Quality Function Deployment and Value Chain Analysis, is proposed here to assist SMEs in managing complexity in e-manufacturing planning. A case study, carried out in Singapore, demonstrates the practicality and utility of the framework in the context of a real business environment.
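As a toy illustration of the QFD element of such a framework, the sketch below turns Balanced Scorecard objective weights and a QFD relationship matrix into priority scores for candidate e-manufacturing initiatives; the objectives, initiatives, weights, and ratings are all invented.

```java
// Toy QFD prioritization: Balanced Scorecard objective weights times a
// relationship matrix yield priority scores for candidate e-manufacturing
// initiatives. Objectives, initiatives, weights, and scores are invented.
public class QfdPrioritizer {
    public static void main(String[] args) {
        String[] objectives = {"customer response time", "process cost", "quality"};
        double[] weights = {0.5, 0.3, 0.2}; // BSC-derived importance of each objective

        String[] initiatives = {"online order portal", "shop-floor data capture"};
        // relationship[i][j]: how strongly initiative j supports objective i (0/1/3/9)
        double[][] relationship = {
            {9, 3},   // customer response time
            {1, 9},   // process cost
            {3, 9}};  // quality

        for (int j = 0; j < initiatives.length; j++) {
            double score = 0;
            for (int i = 0; i < objectives.length; i++)
                score += weights[i] * relationship[i][j];
            System.out.printf("%-24s priority = %.1f%n", initiatives[j], score);
        }
        // online order portal:      0.5*9 + 0.3*1 + 0.2*3 = 5.4
        // shop-floor data capture:  0.5*3 + 0.3*9 + 0.2*9 = 6.0
    }
}
```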
Abstract:
This paper deals with the classification of news items in ePaper, a prototype system of a future personalized newspaper service on a mobile reading device. The ePaper system aggregates news items from various news providers and delivers to each subscribed user (reader) a personalized electronic newspaper, utilizing content-based and collaborative filtering methods. The ePaper can also provide users with "standard" (i.e., not personalized) editions of selected newspapers, as well as browsing capabilities in the repository of news items. This paper concentrates on the automatic classification of incoming news using a hierarchical news ontology. Based on this classification, on the one hand, and on the users' profiles, on the other, the personalization engine of the system is able to deliver a personalized paper to each user on her mobile reading device.
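A minimal sketch of classification against a hierarchical news ontology follows: each node carries indicative terms, and the classifier descends greedily to the best-matching child. The tiny ontology and the overlap-based scoring are invented for illustration and are not ePaper's actual method.

```java
import java.util.*;

// Sketch of hierarchical ontology classification: starting at the root, the
// classifier moves to the child node whose indicative terms best overlap the
// news item's terms, and stops when no child matches. Ontology and scoring
// are illustrative only.
public class OntologyClassifier {
    record Node(String label, Set<String> terms, List<Node> children) {}

    static Node classify(Node node, Set<String> itemTerms) {
        Node best = null;
        long bestScore = 0;
        for (Node child : node.children()) {
            long score = child.terms().stream().filter(itemTerms::contains).count();
            if (score > bestScore) { bestScore = score; best = child; }
        }
        return best == null ? node : classify(best, itemTerms); // stop when no child matches
    }

    public static void main(String[] args) {
        Node football = new Node("sport/football", Set.of("goal", "league", "match"), List.of());
        Node tennis = new Node("sport/tennis", Set.of("serve", "set", "court"), List.of());
        Node sport = new Node("sport", Set.of("team", "score", "match"), List.of(football, tennis));
        Node politics = new Node("politics", Set.of("minister", "vote", "bill"), List.of());
        Node root = new Node("news", Set.of(), List.of(sport, politics));

        Set<String> item = Set.of("match", "goal", "league", "score");
        System.out.println(classify(root, item).label()); // sport/football
    }
}
```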
Abstract:
Many U.S. students do not perform well on mathematics assessments with respect to algebra topics such as linear functions, a building block for other functions. Poor achievement of U.S. middle school students in this topic is a problem. U.S. eighth graders had average mathematics scores on international comparison tests such as the Third International Mathematics and Science Study, later known as the Trends in International Mathematics and Science Study (TIMSS), in 1995, 1999, and 2003, while Singapore students had the highest average scores. U.S. eighth grade average mathematics scores improved on TIMSS 2007 and held steady on TIMSS 2011. Results from PISA 2009 and 2012 and from the National Assessment of Educational Progress of 2007, 2009, and 2013 showed a lack of proficiency in algebra. Results of curriculum studies involving nations in TIMSS suggest that elementary and middle grades textbooks in high-scoring countries differed from U.S. textbooks with respect to general features. The purpose of this study was to compare treatments of linear functions in Singapore and U.S. middle grades mathematics textbooks. Results revealed features currently in textbooks. Findings should be valuable to constituencies who wish to improve U.S. mathematics achievement. Portions of eight Singapore and nine U.S. middle school student texts pertaining to linear functions were compared with respect to 22 features in three categories: (a) background features, (b) general features of problems, and (c) specific characterizations of problem practices, problem-solving competency types, and transfer of representation. Features were coded using a codebook developed by the researcher, and tallies and percentages were reported. Welch's t-tests and chi-square tests were used, respectively, to determine whether texts differed significantly on the features and whether codes were independent of country. U.S. and Singapore textbooks differed in page appearance and in the number of pages, problems, and images. Texts were similar in problem appearance. Differences in problems related to the assessment of conceptual learning. U.S. texts contained more problems requiring (a) use of definitions, (b) a single computation, (c) interpreting, and (d) multiple responses. These differences may stem from cultural differences in attitudes toward education. Future studies should focus on density of the page, the spiral approach, and multiple-response problems.
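For readers unfamiliar with the test used for the feature comparisons, the sketch below computes Welch's t-statistic and the Welch-Satterthwaite degrees of freedom for two small, invented samples (e.g., problem counts per chapter); unlike the pooled t-test, Welch's test does not assume equal variances between the two sets of textbooks.

```java
// Sketch of Welch's t-test: compares two sample means without assuming equal
// variances. The sample data are invented for illustration.
public class WelchTTest {
    static double mean(double[] x) {
        double s = 0;
        for (double v : x) s += v;
        return s / x.length;
    }

    static double variance(double[] x) { // unbiased sample variance
        double m = mean(x), s = 0;
        for (double v : x) s += (v - m) * (v - m);
        return s / (x.length - 1);
    }

    public static void main(String[] args) {
        double[] us = {24, 31, 28, 35, 30, 27}; // hypothetical U.S. counts
        double[] sg = {18, 22, 20, 19, 23, 21}; // hypothetical Singapore counts
        double se1 = variance(us) / us.length, se2 = variance(sg) / sg.length;
        double t = (mean(us) - mean(sg)) / Math.sqrt(se1 + se2);
        // Welch-Satterthwaite approximation for the degrees of freedom
        double df = Math.pow(se1 + se2, 2)
                  / (se1 * se1 / (us.length - 1) + se2 * se2 / (sg.length - 1));
        System.out.printf("t = %.3f, df = %.1f%n", t, df);
        // Compare t against the t-distribution with df degrees of freedom
        // (via a statistics library or table) to obtain the p-value.
    }
}
```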