67 results for Tablet computers


Relevance:

10.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

10.00%

Publisher:

Abstract:

Ten-year-old boys wrote texts in a National Test in the spring of 2009. The aim of this study is to increase knowledge and understanding of boys' writing skills through description, analysis and interpretation of the texts the boys produced in the National Test in Swedish for junior level, year three, taken in Sweden in 2009. The material consists of texts produced by boys and is focused on their ability to write. By not relating them to texts produced by girls, it is possible to search, review, interpret and observe without simultaneously comparing the two genders. The aim of the test is to measure writing proficiency from a normative perspective, while I investigate content, reception, awareness and other aspects relevant to producing text. Genres are described through the instructions given in the test, which define the work that takes place in the classroom and thereby my approach to the analysis. The latter focuses on finding patterns in the competence of the students rather than looking for flaws and limitations. When competence is sought beyond its relationship to syllabi or the demands of the test itself, the boys' texts from the test provide a general foundation for investigating writing proficiency. Person, place and social group have been removed from the texts, thereby avoiding aspects of social positioning. The texts are seen from the perspective of ten-year-old boys who write texts in a National Test. The theoretical basis provided by Ivanič (2004; 2012) offers models for a theory of writing. A socio-cultural viewpoint (Smidt, 2009; Säljö, 2000), including literacy and a holistic view of writing, runs throughout. Through the use of abductive logic (see 4.4), material and theory work in mutual cooperation. The primary methods, hermeneutics (Gadamer, 1997) and analytical close reading (Gustavsson, 1999), are used depending on the requirements of the texts. The thesis builds its foundation on analyses drawn from theoretically diverse areas of science. Central to the thesis is the result that boys who write texts in the National Test are able to write in two separate genres without converting them or creating hybrids of the two. Furthermore, the boys exhibit extensive knowledge about other types of texts, gained from TV, film, computers, books, games and magazines, even in such a culturally bound context as a test. Texts a boy knows from other situations can be implicitly inserted into his own text, or explicitly marked with the name of the main character, a title, or other signifiers. These texts are written to express and describe what the topic heading of the test requires. In addition, other visible results of the boys' ability to write well emerge through the multitude of analytical methods used throughout the thesis, which both search for and find writing competence in the texts written by the boys.

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates how mobile technology could help bring information and communication technologies (ICT) to people in developing countries. Some people in developing countries have access to ICT while others do not. This digital divide is present in many developing countries, where computers and the Internet are difficult to access. The Internet provides information that can increase productivity and enable markets to function more efficiently; it reduces the time information takes to travel and provides more efficient ways for firms and workers to operate. ICT and the Internet can thus create opportunities for economic growth and productivity in developing countries, which makes it very important to bridge the digital divide and increase Internet connectivity there. The purpose of this thesis is to investigate how mobile technology and mobile services can help bridge the digital divide in developing countries. The theoretical background consists of a collection of articles and reports, gathered from the literature on the digital divide, mobile technology and mobile application development. The empirical research was conducted by emailing a questionnaire to a selection of application developers located in developing countries; its purpose was to gather qualitative information concerning mobile application development in those countries. The main result of this thesis suggests that mobile phones and mobile technology can help bridge the digital divide in developing countries, and that mobile technology provides one of the best tools for doing so. Mobile technology can bring affordable ICT to people who do not have access to computers. Smartphones can provide an Internet connection, mobile services and mobile applications to a rapidly growing number of mobile phone users in developing countries, and new low-cost smartphones give people access to information through the Internet. Mobile technology therefore has the potential to help bridge the digital divide in developing countries, where a vast number of people own mobile phones.

Relevance:

10.00%

Publisher:

Abstract:

Microsoft System Center Configuration Manager is a systems management product for managing large groups of computers and mobile devices. It provides operating system deployment, software distribution, patch management, hardware and software inventory, remote control and many other features for the managed clients. This thesis investigates whether the product is suitable for a large, international organization with no previous centralized solution for managing all such networked devices, and identifies areas where the system can be altered to achieve a more optimal management product from the company's perspective. The results showed that the system is suitable for such an organization, provided it is properly configured and a clear, transparent line of communication exists between key IT personnel.

Relevance:

10.00%

Publisher:

Abstract:

Wind energy has raised great expectations owing to the risks of global warming and of accidents at nuclear power plants. Nowadays, wind farms are often constructed in areas of complex terrain. A potential wind farm location must have its site thoroughly surveyed and the wind climatology analyzed before any hardware is installed. Therefore, modeling Atmospheric Boundary Layer (ABL) flows over complex terrains containing, e.g., hills, forest and lakes is of great interest in wind energy applications, as it can help in locating and optimizing wind farms. Numerical modeling of wind flows using Computational Fluid Dynamics (CFD) has become a popular technique during the last few decades. Due to the inherent flow variability and large-scale unsteadiness typical of ABL flows in general, and especially over complex terrains, the flow can be difficult to predict accurately enough using the Reynolds-Averaged Navier-Stokes (RANS) equations. Large-Eddy Simulation (LES) resolves the largest, and thus most important, turbulent eddies and models only the small-scale motions, which are more universal than the large eddies and thus easier to model. LES is therefore expected to be more suitable for this kind of simulation, although it is computationally more expensive than the RANS approach. With the fast development of computers and open-source CFD software in recent years, the application of LES to atmospheric flows is becoming increasingly common. The aim of the work is to simulate atmospheric flows over realistic and complex terrains by means of LES, the main application being the evaluation of potential inland wind park locations. The thesis reports the development of an LES methodology for simulating atmospheric flows over realistic terrains, and also aims to validate the methodology at real scale. LES are carried out for flow problems ranging from basic channel flows to real atmospheric flows over one of the most recent real-life complex-terrain cases, the Bolund hill. All the simulations reported in the thesis are carried out using a new OpenFOAM®-based LES solver, which uses the 4th-order time-accurate Runge-Kutta scheme and a fractional-step method. Moreover, the development of the LES methodology pays special attention to two boundary conditions: the upstream (inflow) and wall boundary conditions. The upstream boundary condition is generated using the so-called recycling technique, in which the instantaneous flow properties are sampled on a plane downstream of the inlet and mapped back to the inlet at each time step. This technique develops the upstream boundary-layer flow together with the inflow turbulence without any precursor simulation, and thus within a single computational domain. The roughness of the terrain surface is modeled by implementing a new wall function into OpenFOAM® during the thesis work. Both the recycling method and the newly implemented wall function are validated for channel flows at relatively high Reynolds number before being applied to the atmospheric flow applications. After validating the LES model on simple flows, simulations are carried out for atmospheric boundary-layer flows over two types of hills: first, two-dimensional wind-tunnel hill profiles, and second, the Bolund hill located in Roskilde Fjord, Denmark. For the two-dimensional wind-tunnel hills, the study focuses on the overall flow behavior as a function of the hill slope.
Moreover, the simulations are repeated using another wall function, suitable for smooth surfaces, which already existed in OpenFOAM®, in order to study the sensitivity of ABL flows to surface roughness. The simulated results obtained with the two wall functions are compared against the wind-tunnel measurements. It is shown that LES using the implemented wall function produces overall satisfactory results for the turbulent flow over the two-dimensional hills; the prediction of the flow separation and reattachment length for the steeper hill is closer to the measurements than other numerical studies reported in the past for the same hill geometry. The field measurement campaign performed over the Bolund hill provides the most recent field-experiment dataset for the mean flow and turbulence properties. A number of research groups have simulated the wind flow over the Bolund hill; owing to challenging features such as its almost vertical slope, it is considered an ideal experimental test case for validating micro-scale CFD models for wind energy applications. In this work, the simulated results obtained for two wind directions are compared against the field measurements. It is shown that the present LES can reproduce the complex turbulent wind flow structures over a complicated terrain such as the Bolund hill. In particular, the present LES results show the best prediction of the turbulent kinetic energy, with an average error of 24.1%, which is 43% smaller than any other model result reported in the past for the Bolund case. Finally, the validated LES methodology is demonstrated by simulating the wind flow over the existing Muukko wind farm in south-eastern Finland. The simulation is carried out for one wind direction only, and results for the instantaneous and time-averaged wind speeds are briefly reported. The demonstration case is followed by a discussion of the practical aspects of LES for wind resource assessment over a realistic inland wind farm.
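
As a minimal numpy sketch of the recycling idea described in the abstract, the following illustrates one time step of a recycling inflow boundary condition. The structured-grid array layout, sampling index and bulk-velocity rescaling are hypothetical stand-ins for illustration only; the thesis's actual solver is OpenFOAM®-based and uses more elaborate rescaling.

```python
import numpy as np

def recycle_inflow(u, i_recycle, u_bulk_target):
    """One time step of a recycling inflow boundary condition.

    u : ndarray of shape (nx, ny, nz, 3) -- instantaneous velocity field
        on a structured grid (hypothetical layout, for illustration only).
    i_recycle : streamwise index of the sampling plane downstream of the inlet.
    u_bulk_target : desired bulk (mean) streamwise velocity at the inlet.
    """
    # Sample the instantaneous velocity on the recycling plane.
    plane = u[i_recycle].copy()                # shape (ny, nz, 3)

    # Rescale so the mapped plane preserves the target mass flux;
    # production implementations use more elaborate rescalings.
    u_bulk = plane[..., 0].mean()
    plane *= u_bulk_target / u_bulk

    # Map the sampled plane back onto the inlet plane. Turbulent
    # fluctuations developed downstream re-enter the domain, so the
    # boundary layer and its turbulence develop without a precursor run.
    u[0] = plane
    return u

# Tiny demo on a random field (illustration only).
rng = np.random.default_rng(1)
u = 1.0 + 0.1 * rng.standard_normal((64, 32, 32, 3))
u = recycle_inflow(u, i_recycle=20, u_bulk_target=1.0)
```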

Relevance:

10.00%

Publisher:

Abstract:

This study examined how good an artificial intelligence for a computer game can be implemented with current knowledge and technology. AI was delimited to mean AI-controlled game characters, and simple AI implementations were not considered. The work was carried out by reviewing the relevant literature and information on developer-community websites. Entertainment value and believability emerged as the criteria for good AI. A survey of the most popular implementation techniques and of the possibilities of AI showed that, in theory, even a highly advanced AI is feasible. In practice, however, the limited resources of the computer, the limited skills of the developers and the demands of game-development projects appear to constrain the implementation of AI in a commercial product.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work is to apply theory on founding a company, the diffusion of innovation and strategic planning to the development of Delitaz Oy from the recognition of a business opportunity into a new company. The work seeks ways to bring the innovative product developed by the company to market faster, and starting points for a forthcoming update of the business plan. The work brings together theory on the phases of founding a new company and on its survival and success, from recognizing a business opportunity to bringing a new innovative product to market and to the strategic planning of the company. The case section examines the phases of founding the target company, its operating environment, product and business. The method of the work is a combination of case study, development research and action research. The actual practical testing phase of the development or action research falls outside the scope of this work, as the desired changes are only now being implemented. As conclusions, the development of one business opportunity from an idea into a product and a technology company is described and compared with the theory on founding a company, and ways to speed up bringing the product to market are presented, together with possible strategic options for updating the business plan.

Relevance:

10.00%

Publisher:

Abstract:

This bachelor's thesis provides a comprehensive overview of the various methods for deploying multiple displays on PC hardware, of which several exist, differing in features and intended use. The work examines the history of multi-display support in Windows and its development across Windows versions, from the early days of such support in the 1990s up to the most recent Windows operating systems. Finally, multi-display support in games is considered, along with how to make use of multiple displays in games that do not support them natively.

Relevance:

10.00%

Publisher:

Abstract:

Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that scale to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to be able to effectively build predictive models of various genotype-phenotype relationships. This work explores the problem of selecting the genetic variant subsets that are most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, the models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
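
A minimal sketch, using scikit-learn on a synthetic toy matrix rather than the thesis's actual code or data, of the nested cross-validation pattern the abstract argues for: the feature-selection filter sits inside the pipeline, so it is re-fit on each training fold only, and an outer loop estimates the generalization of the whole procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

# Toy stand-in for a GWAS matrix: many features, few informative ones.
X, y = make_classification(n_samples=200, n_features=2000,
                           n_informative=20, random_state=0)

# Putting the filter *inside* the pipeline means it is re-fit on each
# training fold only -- selecting features on the full data first would
# leak information and inflate the accuracy estimate.
pipe = Pipeline([
    ("select", SelectKBest(f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Inner loop: tune the number of selected variants.
inner = GridSearchCV(pipe, {"select__k": [10, 50, 200]}, cv=3)

# Outer loop: estimate generalization of the whole procedure.
scores = cross_val_score(inner, X, y, cv=5)
print(f"nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```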

Relevance:

10.00%

Publisher:

Abstract:

This literature review aims to clarify what is known about map matching using inertial sensors, and what the requirements are for map matching, the inertial sensors, their placement and possible complementary positioning technology. The target is to develop a wearable location system that can position itself automatically within a complex construction environment with the aid of an accurate building model. The wearable location system should work on a tablet computer running an augmented reality (AR) solution capable of tracking and visualizing 3D CAD models in the real environment. The wearable location system is needed to support the AR system in initializing the accurate camera-pose calculation and in automatically finding the right location in the 3D CAD model. One type of sensor that does seem applicable to people tracking is the inertial measurement unit (IMU). IMU sensors in aerospace applications, based on laser gyroscopes, are large but provide very accurate position estimates with limited drift. Small and light units, such as those based on Micro-Electro-Mechanical Systems (MEMS) sensors, are becoming very popular, but they have a significant bias and therefore suffer from large drift, requiring a calibration method such as map matching. The system requires very little fixed infrastructure, and its monetary cost is proportional to the number of users rather than to the coverage area, as is the case for traditional absolute indoor location systems.
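
To illustrate why MEMS-grade inertial sensors need a complementary correction source such as map matching, here is a minimal numpy sketch (the sampling rate and bias value are illustrative assumptions, not measured figures): double-integrating even a small constant accelerometer bias makes the position error grow quadratically with time.

```python
import numpy as np

# Strapdown dead reckoning in 1-D: double-integrate acceleration.
dt = 0.01                     # 100 Hz sampling (assumed)
t = np.arange(0.0, 60.0, dt)  # one minute of standing still
true_accel = np.zeros_like(t)
bias = 0.05                   # m/s^2, a modest MEMS-grade bias (assumed)

# The sensor reports truth plus bias; integrating twice turns the
# constant bias into a quadratically growing position error.
measured = true_accel + bias
velocity = np.cumsum(measured) * dt
position = np.cumsum(velocity) * dt

print(f"drift after 60 s: {position[-1]:.1f} m")  # ~90 m from bias alone
```

This is why the review looks to map matching against an accurate building model: the drifting inertial estimate must be periodically pulled back to positions that are consistent with the known geometry.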

Relevance:

10.00%

Publisher:

Abstract:

Digitalization has been predicted to change the future, as a growing range of non-routine tasks will be automated, offering new kinds of business models for enterprises. Service-oriented architecture (SOA) provides a basis for designing and implementing well-defined problems as reusable services, allowing computers to execute them. Service-oriented design has the potential to act as a mediator between IT and human resources, but enterprises struggle with their SOA adoption and lack a linkage between the benefits and costs of services. This thesis studies the phenomenon of service reuse in enterprises, proposing an ontology that links different kinds of services with their conceptual role as part of the business model. The proposed ontology was created on the basis of qualitative research conducted in three large enterprises. Service reuse has two roles in enterprises: it enables automated data sharing among human and IT resources, and it may provide cost savings in service development and operations. From a technical viewpoint, the ability to define a business problem as a service is one of the key enablers of service reuse. The research proposes two service identification methods: the first identifies prospective services in the existing documentation of the enterprise, and the second models the services from a functional viewpoint, supporting service identification sessions with business stakeholders.

Relevance:

10.00%

Publisher:

Abstract:

The emergence of depth sensors has made it possible to track not only monocular cues but also the actual depth values of the environment. This is especially useful in augmented reality solutions, where the position and orientation (pose) of the observer need to be accurately determined. It allows virtual objects to be placed in the user's view through, for example, the screen of a tablet or augmented reality glasses (e.g. Google Glass). Although early 3D sensors have been physically quite large, their size is decreasing, and eventually a 3D sensor could be embedded, for example, in augmented reality glasses. The wider subject area considered in this review is 3D SLAM (Simultaneous Localization and Mapping) methods, which take advantage of the 3D information made available by modern RGB-D sensors such as the Microsoft Kinect; a review of SLAM and 3D tracking in augmented reality is thus a timely subject. We also try to identify the limitations and possibilities of different tracking methods, and how they should be improved in order to allow their efficient integration into the augmented reality solutions of the future.
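
A core step in many depth-based tracking pipelines is the rigid alignment of corresponding 3-D points, solved in closed form with an SVD (the Kabsch solution used inside ICP variants). The following is a minimal numpy sketch, assuming the point correspondences are already known; a full ICP loop, as used in RGB-D SLAM systems, would re-estimate correspondences between solves.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P + t ~= Q.

    P, Q : (3, N) arrays of corresponding 3-D points, e.g. sampled
    from two depth frames. Correspondences are assumed known here.
    """
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                   # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# Smoke test: recover a known pose from 100 random points.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 100))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
Q = R_true @ P + np.array([[0.5], [-0.2], [1.0]])
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true))  # True
```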

Relevance:

10.00%

Publisher:

Abstract:

Many, if not all, aspects of our everyday lives are related to computers and control; microprocessors and wireless communications are embedded in our daily lives. Embedded systems are an attractive field because they combine three key factors: small size, low power consumption and high computing capability. The aim of this thesis is to study how Linux communicates with the hardware, to answer the question of whether it is possible to use an operating system like Debian for embedded systems and, finally, to build a real-time mechatronic application. The thesis presents Linux and the Xenomai real-time patch, and analyzes the bootloader and communication with the hardware. The BeagleBone evaluation board is presented along with the application project, which consists of a robot cart with a driver circuit, a line sensor reading a black line, and two XBee antennas; it makes use of Xenomai threads and the real-time kernel. According to the obtained results, Linux is able to operate as a real-time operating system. Future research directions in the area of embedded Linux are also discussed.
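
The thesis's control application runs as Xenomai real-time threads in C; purely as a language-neutral illustration, here is a minimal Python sketch of the kind of periodic proportional line-following loop such a thread would execute. The sensor and motor helpers are hypothetical stand-ins, and plain time.sleep() only approximates the fixed period that a Xenomai real-time task would guarantee.

```python
import time

CYCLE = 0.005  # 5 ms control period (illustrative)

def read_line_sensor():
    # Hypothetical stand-in for the real sensor driver: returns the
    # cart's lateral offset from the black line in the range -1..1.
    return 0.0  # simulated: cart is centered on the line

def set_motors(left, right):
    # Hypothetical stand-in for the motor driver circuit (PWM duties).
    pass

def control_loop(steps=200):
    # Proportional controller: steer against the measured offset.
    # On the real robot this loop would run in a Xenomai real-time
    # task so the kernel guarantees the period; here we merely sleep
    # until the next scheduled tick.
    base, kp = 0.6, 0.4
    next_tick = time.monotonic()
    for _ in range(steps):
        offset = read_line_sensor()
        set_motors(base - kp * offset, base + kp * offset)
        next_tick += CYCLE
        time.sleep(max(0.0, next_tick - time.monotonic()))

control_loop()  # ~1 second of simulated control
```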

Relevance:

10.00%

Publisher:

Abstract:

A postgraduate seminar series titled Situational Awareness for Critical Infrastructure Protection was held at the Department of Military Technology of the National Defence University in 2015. This book is a collection of some of the talks presented in the seminar. The papers address the design of an inter-organizational situation awareness system, principles of designing for situation awareness, situation awareness in distributed teams, vulnerability analysis in a critical system context, tactical Command, Control, Communications, Computers & Intelligence (C4I) systems, and improving situational awareness in the circle of trust. This set of papers tries to give some insight into current issues in situation awareness for critical infrastructure protection. The seminar has always published its papers, but only as an internal publication of the Finnish Defence Forces, which has not hindered their publication in international conferences. Publication of these papers in peer-reviewed conferences has indeed always been the goal of the seminar, since it teaches the writing of conference-level papers. We nevertheless hope that an internal publication in the department's series is useful to the Finnish Defence Forces by offering easy access to these papers.

Relevance:

10.00%

Publisher:

Abstract:

The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build computer software with security in mind. A problem with doing so is how to define secure software, and how to measure security. This thesis divides the problem into three research questions: first, how can we measure the security of software; second, what types of tools are available for measuring security; and finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. The thesis takes the perspective of software engineers in the software design phase; code-level semantics and programming language specifics are therefore not discussed, and organizational policy, management issues and the software development process are also out of scope. The first two research problems were studied through a literature review, while the third was studied through a case study. The target of the case study was a Java-based email server called Apache James, whose changelog, security issue details and source code were accessible. The research revealed that there is a consensus in the terminology of software security. Security verification activities are commonly divided into evaluation and assurance; the focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of the software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design, and interpreting their results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired: the metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out areas where security metrics must improve if verification of security from the design phase is desired.