943 results for tablet-PCs (TPC)
Abstract:
With the recent increase in interest in service-oriented architectures (SOA) and Web services, developing applications with the Web services paradigm has become feasible. Web services are self-describing, platform-independent computational elements. New applications can be assembled from a set of previously created Web services, which are composed into a service that uses its components to perform a certain task. This is the idea of service composition. To bring service composition to the mobile phone, I have created Interactive Service Composer for mobile phones. With Interactive Service Composer, the user can build service compositions on his mobile phone, consisting of Web services or services available on the phone itself. The compositions are reusable and can be saved in the phone's memory, and previously saved compositions can be used as components in new ones. While developing applications for mobile phones has been possible for some time, the development experience does not match that of desktop computers. When developing for mobile phones, the developer has to weigh design decisions more carefully: with limited processing power and memory, applications cannot perform as well as they do on desktop PCs. On the other hand, this does not diminish the appeal of developing applications for mobile devices.
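As a rough illustration of the composition idea described above, here is a minimal Python sketch (a hypothetical API, not the actual Interactive Service Composer): services are chained into a composition, and a saved composition can itself be used as a component of a new one.

```python
# Hypothetical composition model: each service maps an input to an
# output; a composition chains services and is itself a service.
from typing import Callable, List

class Service:
    def __init__(self, name: str, fn: Callable):
        self.name = name
        self.fn = fn

    def invoke(self, data):
        return self.fn(data)

class Composition(Service):
    """A saved, reusable chain of services; usable as a component."""
    def __init__(self, name: str, services: List[Service]):
        super().__init__(name, self._run)
        self.services = services

    def _run(self, data):
        for service in self.services:   # pipe output to the next stage
            data = service.invoke(data)
        return data

# Invented components: one local phone service, one Web service stub.
read_location = Service("gps", lambda _: {"lat": 60.17, "lon": 24.94})
weather = Service("weather", lambda loc: f"forecast at {loc['lat']},{loc['lon']}")

forecast_here = Composition("forecast_here", [read_location, weather])
print(forecast_here.invoke(None))

# A previously saved composition can appear inside a new composition:
notify = Service("notify", lambda msg: f"SMS: {msg}")
alert = Composition("alert", [forecast_here, notify])
print(alert.invoke(None))
```

Persisting a composition would then amount to storing the ordered list of component names in the phone's memory.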
Abstract:
The phosphine distribution in a cylindrical silo containing grain is predicted. A three-dimensional mathematical model, which accounts for multicomponent gas-phase transport and the sorption of phosphine into the grain kernel, is developed. In addition, a simple model is presented to describe the death of insects within the grain as a function of their exposure to phosphine gas. The proposed model is solved using the commercially available computational fluid dynamics (CFD) software FLUENT, together with our own C code that customizes the solver to incorporate the models for sorption and insect extinction. Two types of fumigation delivery are studied, namely, fan-forced from the base of the silo and tablet from the top of the silo. An analysis of the predicted phosphine distribution shows that during fan-forced fumigation, the position of the leaky area is very important to the development of the gas flow field and the phosphine distribution in the silo. If the leak is in the lower section of the silo, insects near the top of the silo may not be eradicated. However, the position of a leak does not affect phosphine distribution during tablet fumigation. For such fumigation in a typical silo configuration, phosphine concentrations remain low near the base of the silo. Furthermore, we find that half-life pressure test readings are not an indicator of phosphine distribution during tablet fumigation.
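The abstract does not give the exact form of the insect-mortality model, but a common choice for fumigant dose-response is an accumulated C^n·t exposure measure. The Python sketch below illustrates that idea under assumed constants (N_EXP, D_LETHAL, and the smooth die-off curve are all hypothetical, not taken from the paper):

```python
# Toy insect-mortality model driven by local phosphine exposure.
import numpy as np

N_EXP = 0.8      # hypothetical concentration exponent
D_LETHAL = 50.0  # hypothetical lethal accumulated dose (mg h / m^3)

def surviving_fraction(conc, dt):
    """conc: phosphine concentration time series (mg/m^3) at one point
    in the silo; dt: time step (h). Returns survival over time."""
    dose = np.cumsum(conc ** N_EXP) * dt     # accumulated C^n * t exposure
    return np.exp(-3.0 * dose / D_LETHAL)    # smooth die-off, hypothetical

# Example: a point near the silo top sees low concentration when the
# leak is near the base, so insects there accumulate dose slowly.
hours = np.arange(0, 120, 1.0)
low_conc = np.full_like(hours, 0.5)   # mg/m^3, poorly fumigated region
print("survival after 120 h:", surviving_fraction(low_conc, 1.0)[-1])
```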
Abstract:
In this chapter, we meet the eight children whose documented lives are the heart of this book. The children are spread across six continents, so we have some textual traveling to do. We find each child in a local school. There they venture into literacy along official paths negotiated with their teachers and also along unofficial paths tied to their desire for peer companionship and social belonging (Corsaro, 2011; Nelson, 2007). We are most interested in their literate productions—their composing, be it with stick and dirt, pencil, crayons, and paper, tablet computer, or chalk and slate. Each child is a unique story, and each story is told by an author with particular interests in the goings-on in school, that is, with a particular angle of vision. All the authors, though, take us into a child’s educational circumstance; they give us a sense of the school’s physical site and its official curricular guidelines. Most importantly, they collectively allow us a global view of children as symbol users and social participants in the official and the unofficial worlds of school. No matter where young children go to school, they are expected to learn to “write” (although writing, as the cases illustrate, does not always mean “composing”)...
Abstract:
Background: Alcohol consumption and smoking are the main causes of upper digestive tract cancers; these risk factors account for over 75% of all cases in developed countries. Epidemiological studies have shown that alcohol and tobacco interact multiplicatively on cancer risk, but the pathogenetic mechanism behind this is poorly understood. Strong experimental and human genetic linkage data suggest that acetaldehyde is one of the major factors behind the carcinogenic effect. In the digestive tract, acetaldehyde is mainly formed by microbial metabolism of ethanol. Acetaldehyde is also a major constituent of tobacco smoke. Thus, acetaldehyde from both of these sources may have an interacting carcinogenic effect in the human upper digestive tract. Aims: The first aim of this thesis was to investigate acetaldehyde production and exposure in the human mouth resulting from alcohol ingestion and tobacco smoking in vivo. Secondly, specific L-cysteine products were prepared to examine their efficacy in binding salivary acetaldehyde in order to reduce the exposure of the upper digestive tract to acetaldehyde. Methods: Acetaldehyde levels in saliva were measured in human volunteers during alcohol metabolism, during tobacco smoking, and during the combined use of alcohol and tobacco. The ability of L-cysteine to eliminate acetaldehyde during alcohol metabolism and tobacco smoking was investigated with specifically developed tablets. The acetaldehyde production of Escherichia coli - an important member of the human microbiota - was also measured under different conditions prevailing in the digestive tract. Results and conclusions: These studies established that smokers have significantly increased acetaldehyde exposure during ethanol consumption even when not actively smoking. Acetaldehyde exposure was further increased dramatically during active tobacco smoking. Thus, the elevated aerodigestive tract cancer risk observed in smokers and drinkers may be the result of this increased acetaldehyde exposure. Acetaldehyde produced in the oral cavity during an ethanol challenge was significantly decreased by a buccal L-cysteine-releasing tablet, and smoking-derived acetaldehyde could be removed entirely by using a tablet containing L-cysteine. In conclusion, this thesis confirms the essential role of acetaldehyde in the pathogenesis of alcohol- and smoking-induced cancers, and presents a novel experimental approach to decreasing the local acetaldehyde exposure of the upper digestive tract with L-cysteine, with the eventual goal of reducing the prevalence of upper digestive tract cancers.
Abstract:
Automated dose dispensing is a growing sector of pharmaceutical services in which medicines are machine-packed into small single-dose pouches in two-week batches. The suitability of medicinal products for automated dose dispensing had not previously been studied systematically. This study was conducted in collaboration with the dose dispensing unit of the Espoonlahti pharmacy, and its aim was to determine the properties of a tablet that are optimal for the dose dispensing process, in order to reduce breakages and transfers. A breakage is the crumbling, halving, or other breaking of a medicinal product during dose dispensing; a transfer is the dispensing of a medicinal product into the wrong dose pouch. As a percentage of the dispensed volume, breakages and transfers are very rare, but in absolute terms they are numerous and increasing as automated dose dispensing becomes more common. Breakages and transfers cause considerable extra work because the pouches must be corrected, so their number should be reduced. A further aim was to determine what can be asked of medicine manufacturers about the properties and stability of their products, so that a product's suitability for automated dose dispensing could be judged on the basis of written information. According to the results, the tablet product optimal for dose dispensing, with respect to reducing breakages and transfers, is smallish or medium-sized, coated, strong, and unscored, and the optimal relative humidity in the production facilities of a dose dispensing unit would be about 30-40%. In addition to size, coating, crushing strength, and score line, manufacturers should be asked about a product's stability outside its original packaging and its sensitivity to light, heat, and moisture. Besides breakages and transfers, the stability of a moisture-sensitive acetylsalicylic acid product (Disperin 100 mg) was studied at 25 °C and 60% RH, because the air humidity of the production facilities is not regulated. The stability study lasted four weeks, which is sufficient because it is the maximum time that tablets spend outside their original packaging before use in the dose dispensing process. The tablets were stored in an open original package (a jar), in a closed original package, in a cassette of the dose dispensing machine, and in two different dose pouches (a new material and the material currently in use). According to the results, the cassette of the dose dispensing machine protects against moisture as poorly as an open jar, whereas the new pouch material protects against moisture better than the material currently in use. Raman spectroscopy measurements showed that no degradation of acetylsalicylic acid into salicylic acid occurs in the tablets during the four-week follow-up. Moisture weakens the crushing strength of the tablets, which may cause more breakages. To prevent breakages, it would be advisable either to keep the humidity of the production facilities at a constant controlled level or to load the tablets into the cassettes as close to dispensing as possible, especially when the air humidity is high. The heat sensitivity of a medicinal product was also studied, because the sealing unit of the dose dispensing machine exposes the dose pouches to a temperature of about 75 °C if the machine is stopped in mid-run. The study was performed with XRPD equipment in which the sample temperature can be controlled. Based on the heat sensitivity studies, one hour at 75 °C does not cause changes in a carbamazepine tablet (Neurotol 200 mg). The results showed, however, that the carbamazepine in the studied product is not its most heat-sensitive form, so other heat-sensitive medicinal products should be studied to obtain more information on the effects of heat.
Abstract:
In Finland, the collection of forest resource data for forest planning is shifting from stand-level field assessment to remote sensing based on laser scanning and aerial photographs. The purpose of this study was to determine the accuracy of predicting a stand's total volume and diameter distribution from plot-level stand and tree attributes using the MSN, PRM, ML, and FMM methods together with the Weibull distribution, in the following ways: 1. the PRM method at the grid level, 2. the PRM method at the stand level, 3. the ML method at the grid level, and 4. the ML method at the stand level. In addition, the accuracy of predicting a stand's total volume was determined using the stem frequency distribution produced for the stand. The results were calculated by tree species for pine, spruce, birch, and other species, and the species-level results were summed at the stand level. The computation time and storage space requirements of the methods were also determined. The study material consisted of 249 fixed-radius circular sample plots measured in the forests of the Evo unit of HAMK University of Applied Sciences. Tree data for 12 stands, with areas ranging from 0.2 to 1.94 hectares, were measured with a harvester. The area-based laser scanning data had a pulse density of 1.8/m2, and the aerial photographs had a pixel size of 0.5 meters. The stand total volume was predicted or estimated from the plot-level tree attributes with the help of features of the laser scanning and aerial photograph data. The results were calculated separately for all stands and for the stands with an area over 0.5 hectares, of which there were 8. Between 1 and 10 sample plots were used as neighbors for the grid cells of a stand. Depending on the method and the number of neighbors, the relative RMSE of the total volume varied between 20.76 and 52.86 percent and the bias between -12.04 and 46.54 percent; the corresponding figures for stands over 0.5 hectares were 6.74-59.41 percent and -8.04-49.59 percent. The computation time varied strongly with the method and the number of neighbors used; with more advanced programming and software, the computation times could fall significantly. With the tested methods, storage space is not a limiting factor even at a large scale. Regarding the diameter distribution, the PRM method predicts a very narrow diameter distribution for a tree species if the plot consists of only a few trees of nearly the same size; this affected the results particularly for method PRM2.
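For readers unfamiliar with the accuracy measures quoted above, the following Python sketch shows how the relative RMSE and relative bias of predicted stand volumes are typically computed against harvester-measured reference volumes. The volume figures are invented, and sign conventions for bias vary between studies:

```python
# Relative RMSE and relative bias of volume predictions, as percentages
# of the mean observed volume.
import numpy as np

def relative_rmse_and_bias(predicted, observed):
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    err = predicted - observed          # some studies use observed - predicted
    rmse = np.sqrt(np.mean(err ** 2))
    bias = np.mean(err)
    mean_obs = np.mean(observed)
    return 100 * rmse / mean_obs, 100 * bias / mean_obs

# Hypothetical stand volumes (m^3/ha), not from the study:
obs = [180.0, 240.0, 95.0, 310.0]
pred = [165.0, 260.0, 120.0, 290.0]
print(relative_rmse_and_bias(pred, obs))
```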
Abstract:
The use of natural xanthine derivatives in medicine is complicated by their physical properties: theobromine is poorly soluble, while theophylline is highly sensitive to hydration. The aim of this study was to improve the bioavailability of xanthines by co-crystallization; theophylline was also co-crystallized with carboxylic acids (capric, citric, glutaric, maleic, malonic, oxalic, stearic, succinic) and HPMC. Co-crystallization was performed by slow evaporation and ball milling. Physical stability was checked by wet granulation and water sorption methods, and solubility was measured by intrinsic tablet dissolution. Theobromine formed a co-crystal with the other xanthines, and theophylline interacted with all the acids except stearic acid and HPMC; the latter showed alternative interactions based on hydrogen bonding. Hydration resistance was good in the theophylline:succinic acid co-crystal and excellent in the complexes containing capric acid, stearic acid, and HPMC. Theophylline:HPMC showed improved solubility. The reported approach can promote the use of xanthines and can be recommended for other compounds with similar problems.
Abstract:
The aim of this study was to investigate powder and tablet behavior at the level of mechanical interactions between single particles. Various aspects of powder packing, mixing, compression, and bond formation were examined with the aid of computer simulations. The packing and mixing simulations were based on spring forces acting between particles. The packing and breakage simulations included systems in which permanent bonds were formed and broken between particles, based on their interaction strengths. In the course of this work, a new simulation environment based on Newtonian mechanics and elementary interactions between particles was created, and a new method for evaluating mixing was developed. Powder behavior is complicated, and many of its aspects are still unclear: powders exhibit some properties of solids and others of liquids, so their physics is far from settled. However, using relatively simple models based on particle-particle interactions, many powder properties could be replicated in this work. Simulated packing densities were similar to values reported in the literature. The method developed for describing powder mixing correlated well with previous methods. The new method can be applied to determine mixing in completely homogeneous materials, without dividing them into different components; as such, it describes the efficiency of the mixing method regardless of the powder's initial setup. Mixing efficiency under different vibrations was examined, and we found that certain combinations of amplitude, direction, and frequency resulted in better mixing while using less energy. Simulations using exponential force potentials between particles were able to explain the elementary compression behavior of tablets and produced force distributions similar to the pressure distributions reported in the literature. Tablet-breaking simulations resulted in breaking strengths similar to measured tablet breaking strengths. In general, many aspects of powder behavior can be explained by mechanical interactions at the particle level, and single-particle properties can be reliably linked to powder behavior with accurate simulations.
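As a taste of the approach, here is a deliberately crude Python sketch of Newtonian unit-mass particles with linear contact springs settling under gravity. The thesis environment is far richer (bond formation and breakage, mixing metrics, compression), and all constants here are illustrative:

```python
# Minimal particle-spring packing sketch in 2-D with explicit stepping.
import numpy as np

def step(pos, vel, radius=0.5, k=200.0, dt=1e-3, g=9.81, damp=0.01):
    """One explicit time step for unit-mass particles."""
    force = np.zeros_like(pos)
    force[:, 1] -= g                          # gravity
    n = len(pos)
    for i in range(n):                        # pairwise contact forces
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * radius - dist
            if overlap > 0:                   # particles in contact
                f = k * overlap * d / dist    # linear repulsive spring
                force[i] -= f
                force[j] += f
    below = pos[:, 1] < radius                # penalty force from floor
    force[below, 1] += k * (radius - pos[below, 1])
    vel = (1.0 - damp) * vel + force * dt     # damped velocity update
    return pos + vel * dt, vel

rng = np.random.default_rng(0)
pos = rng.uniform([0, 0], [4, 8], size=(30, 2))
vel = np.zeros_like(pos)
for _ in range(3000):
    pos, vel = step(pos, vel)
print("bed height after settling:", round(pos[:, 1].max(), 2))
```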
Abstract:
From a find to an ancient costume - the reconstruction of archaeological textiles. Costume tells who we are. It warms and protects us, but it also tells about our identity: gender, age, family, social group, work, religion, and ethnicity. Textile fabrication, use, and trade have been an important part of human civilization for more than 10,000 years. There are plenty of archaeological textile finds, but they are small and fragmentary, and their interpretation requires special skills. Finnish textile finds from the younger Iron Age have been studied for more than a hundred years. They have also served as the basis for several reconstructions called muinaispuku, 'ancient costume'. This thesis surveys the ancient costume reconstructions made in Finland and discusses the objectives of the reconstruction projects. The earlier reconstruction projects are seen as part of the national project of constructing a glorious past for the Finnish nation, and of the role women took in that project; many earlier reconstructions were designed as festive costumes for wealthy ladies. In the 1980s and 1990s many new ancient costume reconstructions were made, differing from their predecessors in the pattern of the skirt. They were also made following more closely the principles of scientific reconstruction. At the same time, historical re-enactment and living history have risen in popularity as hobbies, and the use of ancient costumes is widening from festive occasions to re-enactment purposes. A hypothesis of the textile craft methods used in younger Iron Age Finland is introduced, based on archaeological finds from Finland and neighboring countries, ethnological knowledge of textile crafts, and experimental archaeology. The yarn was spun with a spindle, and the fabrics were woven on a warp-weighted loom and dyed with natural colors. Bronze spiral appliqués and complicated tablet-woven bands may have been made by specialist craftswomen or -men. Knowledge of the techniques, together with the results of experimentation and experimental archaeology, makes it possible to assess how well the existing ancient costume reconstructions succeed as scientific reconstructions. Only one costume reconstruction project, the Kaarina costume fabricated in the Kurala Kylämäki museum, has been carried out using methods as authentic as possible. The use of ancient craft methods is time-consuming and expensive; this fact can be seen as a research result in itself, for it demonstrates how valuable ancient textiles were in their own time of use. In costume reconstruction work, the skill of the craftswoman and her knowledge of ancient working methods are strongly emphasized. Textile research is seen as a process in which the examination of original textiles and reconstruction experiments inform each other. Reconstruction projects can contribute much both to research on the Finnish younger Iron Age and to the popularization of archaeological knowledge. A reconstruction is never finished, and earlier reconstructions, too, should be reviewed in the light of new findings.
Abstract:
The basic characteristic of a chaotic system is its sensitivity to infinitesimal changes in its initial conditions. A limit to predictability in a chaotic system arises mainly from this sensitivity and from the inability of the model to reveal the underlying dynamics of the system. In the present study, an attempt is made to quantify the uncertainties involved and thereby improve predictability by adopting multivariate nonlinear ensemble prediction. Daily rainfall data of the Malaprabha basin, India, for the period 1955-2000 are used for the study. The series is found to exhibit a low-dimensional chaotic nature, with the dimension varying from 5 to 7. A multivariate phase space is generated, considering a climate data set of 16 variables. The chaotic nature of each of these variables is confirmed using the false nearest neighbor method. The redundancy, if any, of this atmospheric data set is then removed by employing principal component analysis (PCA), reducing it to eight principal components (PCs). This multivariate series (rainfall along with the eight PCs) is found to exhibit a low-dimensional chaotic nature with dimension 10. Nonlinear prediction employing the local approximation method is performed using the univariate series (rainfall alone) and the multivariate series for different combinations of embedding dimensions and delay times. The uncertainty in initial conditions is thus addressed by reconstructing the phase space using different combinations of parameters. The ensembles generated from multivariate predictions are found to be better than those from univariate predictions. The uncertainty in predictions is decreased, or in other words predictability is increased, by adopting multivariate nonlinear ensemble prediction. The restriction on the predictability of a chaotic series can thus be relaxed by quantifying the uncertainty in the initial conditions and by including other possible variables that may influence the system. (C) 2011 Elsevier B.V. All rights reserved.
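The univariate core of the procedure, delay embedding followed by local-approximation prediction from nearest phase-space neighbors, can be sketched compactly. The embedding dimension, delay, and neighbor count below are illustrative, and the multivariate case simply embeds the rainfall series together with the principal components:

```python
# Delay embedding and one-step local-approximation forecast.
import numpy as np

def delay_embed(x, m, tau):
    """Build (m)-dimensional delay vectors with delay tau."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def local_predict(x, m=6, tau=1, k=5):
    vecs = delay_embed(x, m, tau)
    query, library = vecs[-1], vecs[:-1]
    # successor of each library vector is the next observed value
    succ = x[(m - 1) * tau + 1 : len(library) + (m - 1) * tau + 1]
    dist = np.linalg.norm(library - query, axis=1)
    nearest = np.argsort(dist)[:k]
    return succ[nearest].mean()      # local (zeroth-order) approximation

# Synthetic stand-in series; real use would pass the daily rainfall data.
rain = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.randn(600)
print("one-step forecast:", local_predict(rain))
```

Repeating the forecast over different (m, tau, k) combinations yields the ensemble whose spread reflects the initial-condition uncertainty.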
Abstract:
An understanding of application I/O access patterns is useful in several situations. First, gaining insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps create benchmarks that closely mimic realistic application behavior. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop. All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis, or deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces. In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enable it to recognize applications based on the patterns in the traces, and to mark out regions in a long trace that encapsulate sets of primitive operations representing higher-level application actions. It is robust enough to work around discrepancies between training and target traces, such as differences in length and interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting. We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications, such as compilations, as well as industrial-strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
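To make the recognition step concrete, the sketch below scores an operation trace against per-application discrete HMMs with the forward algorithm and picks the best-scoring model. The paper's method uses Profile HMMs with match/insert/delete states; the two models and their parameters here are invented for illustration:

```python
# Classify a trace of primitive ops by maximum HMM likelihood.
import numpy as np

def log_forward(obs, start, trans, emit):
    """Forward algorithm. obs: int-coded ops; start: (S,),
    trans: (S, S), emit: (S, V). Returns log P(obs | model)."""
    logp = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        logp = np.logaddexp.reduce(
            logp[:, None] + np.log(trans), axis=0) + np.log(emit[:, o])
    return np.logaddexp.reduce(logp)

# Ops coded 0=lookup, 1=read, 2=write; two hypothetical app models.
models = {
    "compile": (np.array([0.7, 0.3]),
                np.array([[0.8, 0.2], [0.3, 0.7]]),
                np.array([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]])),
    "backup":  (np.array([0.5, 0.5]),
                np.array([[0.6, 0.4], [0.4, 0.6]]),
                np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])),
}
trace = [0, 1, 1, 0, 1, 2]
print(max(models, key=lambda m: log_forward(trace, *models[m])))
```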
Abstract:
Estimates of predicate selectivities by database query optimizers often differ significantly from those actually encountered during query execution, leading to poor plan choices and inflated response times. In this paper, we investigate mitigating this problem by replacing selectivity-error-sensitive plan choices with alternative plans that provide robust performance. Our approach is based on the recent observation that even the complex and dense "plan diagrams" associated with industrial-strength optimizers can be efficiently reduced to "anorexic" equivalents featuring only a few plans, without materially impacting query processing quality. Extensive experimentation with a rich set of TPC-H and TPC-DS-based query templates in a variety of database environments indicates that plan diagram reduction typically retains plans that are substantially resistant to selectivity errors on the base relations. However, it can sometimes also be severely counter-productive, with the replacements performing much worse. We address this problem through a generalized mathematical characterization of plan cost behavior over the parameter space, which lends itself to efficient criteria of when it is safe to reduce. Our strategies are fully non-invasive and have been implemented in the Picasso optimizer visualization tool.
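A highly simplified sketch of the reduction idea: a point in the selectivity grid may be re-colored with another retained plan whose cost at that point stays within a (1 + lambda) threshold. The plans, costs, and greedy absorption order below are synthetic; the actual algorithm operates on optimizer-produced plan diagrams:

```python
# Toy "anorexic" plan-diagram reduction with a cost-increase budget.
LAMBDA = 0.2  # allowed cost increase, e.g. 20%

def reduce_diagram(diagram, costs):
    """diagram: {point: plan}; costs: {(plan, point): cost}.
    Greedily absorb small plans into larger ones within budget."""
    plans = sorted(set(diagram.values()),
                   key=lambda p: sum(v == p for v in diagram.values()))
    kept = set(diagram.values())
    for small in plans:
        points = [pt for pt, pl in diagram.items() if pl == small]
        for big in sorted(kept - {small}, key=str):
            if all(costs[(big, pt)] <= (1 + LAMBDA) * costs[(small, pt)]
                   for pt in points):
                for pt in points:          # re-color all of small's points
                    diagram[pt] = big
                kept.discard(small)
                break
    return diagram, kept

# Tiny synthetic 1-D diagram over 4 selectivity points:
diagram = {0: "P1", 1: "P1", 2: "P2", 3: "P1"}
costs = {("P1", 0): 10, ("P1", 1): 12, ("P1", 2): 21, ("P1", 3): 30,
         ("P2", 0): 14, ("P2", 1): 15, ("P2", 2): 18, ("P2", 3): 29}
print(reduce_diagram(diagram, costs))   # P2 absorbed into P1
```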
Abstract:
Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
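One of the approximation strategies, random sampling of the parameter grid followed by nearest-neighbor inference, can be sketched as follows. optimize() is a stand-in for a real optimizer API call, and the plan boundaries it encodes are invented:

```python
# Approximate a plan diagram by optimizing a sparse sample of the grid
# and inferring the rest with a 1-NN classifier.
import numpy as np

def optimize(point):
    """Hypothetical optimizer call: returns the optimal plan id."""
    x, y = point
    return 0 if x + y < 1.0 else (1 if x > y else 2)

def approximate_diagram(resolution=50, sample_rate=0.1, seed=1):
    rng = np.random.default_rng(seed)
    grid = np.array([(i, j) for i in range(resolution)
                            for j in range(resolution)]) / resolution
    n_sample = int(sample_rate * len(grid))
    idx = rng.choice(len(grid), n_sample, replace=False)
    sampled = grid[idx]
    labels = np.array([optimize(p) for p in sampled])
    diagram = np.empty(len(grid), dtype=int)
    for k, p in enumerate(grid):          # 1-NN inference per grid point
        nearest = np.argmin(np.linalg.norm(sampled - p, axis=1))
        diagram[k] = labels[nearest]
    exact = np.array([optimize(p) for p in grid])
    return np.mean(diagram == exact)      # fraction of grid colored right

print("diagram accuracy:", approximate_diagram())
```

Here only 10% of the grid is optimized, mirroring the paper's trade-off between diagram accuracy and optimization overhead.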
Abstract:
Workstation clusters equipped with high-performance interconnects that have programmable network processors offer interesting opportunities to enhance the performance of the parallel applications run on them. In this paper, we propose schemes in which certain application-level processing in parallel database query execution is performed on the network processor. We evaluate the performance of TPC-H queries executing on a high-end cluster where all tuple processing is done on the host processor, using a timed Petri net model, and find that tuple processing costs on the host processor dominate the execution time. These results are validated using a small cluster. We therefore propose four schemes in which certain tuple processing activity is offloaded to the network processor. The first two schemes offload the tuple-splitting activity - the computation that identifies the node on which to process each tuple - resulting in an execution time speedup of 1.09 relative to the base scheme, but with the I/O bus becoming the bottleneck resource. In the third scheme, in addition to offloading tuple processing activity, the disk and network interface are combined to avoid the I/O bus bottleneck, which results in speedups of up to 1.16, but with high host processor utilization. Our fourth scheme, in which the network processor also performs a part of the join operation along with the host processor, gives a speedup of 1.47 along with balanced system resource utilization. Further, we observe that the proposed schemes perform equally well even in a scaled architecture, i.e., when the number of processors is increased from 2 to 64.
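The bottleneck reasoning behind these speedups can be illustrated with a back-of-the-envelope calculation: each resource carries a per-tuple service demand, the busiest resource bounds throughput, and offloading shifts demand from the host CPU to the network processor, possibly moving the bottleneck to the I/O bus. All numbers below are illustrative, not taken from the paper's Petri net model:

```python
# Bottleneck analysis: the largest per-tuple demand limits throughput.
def bottleneck(demands):
    """demands: per-tuple service demand (us) per resource."""
    resource, demand = max(demands.items(), key=lambda kv: kv[1])
    return resource, demand

base = {"host_cpu": 10.0, "net_proc": 2.0, "io_bus": 9.0, "disk": 5.0}
# Offloading tuple splitting moves work from host CPU to net processor:
offloaded = {"host_cpu": 8.0, "net_proc": 4.5, "io_bus": 9.0, "disk": 5.0}

b_res, b_t = bottleneck(base)
o_res, o_t = bottleneck(offloaded)
print(f"base bottleneck: {b_res}; speedup = {b_t / o_t:.2f}; "
      f"new bottleneck: {o_res}")
```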
Abstract:
In this paper, we present an unrestricted Kannada online handwritten character recognizer that is viable for real-time applications. It handles Kannada and Indo-Arabic numerals, punctuation marks, and special symbols such as $, &, and #, apart from all the aksharas of the Kannada script. The dataset used contains the handwriting of 69 people from four different locations, making the recognition writer-independent. We found that for the DTW classifier, using smoothed first derivatives as features enhanced the performance to 89%, compared with 85% for preprocessed coordinates, but was too inefficient in terms of time. To overcome this, we used Statistical Dynamic Time Warping (SDTW) and achieved 46 times faster classification with comparable accuracy (88%), making it fast enough for practical applications. The accuracies reported are raw symbol recognition results from the classifier, so there is good scope for improvement in actual applications, where domain constraints such as a fixed vocabulary, language models, and post-processing can be employed. A working demo for the recognition of Kannada words is also available on a tablet PC.
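For reference, the baseline DTW distance used by the classifier can be sketched as below: two variable-length feature sequences are aligned along the cheapest warped path, and a symbol is labeled by its nearest template. The SDTW variant that gives the reported speedup is not shown, and the templates here are random stand-ins:

```python
# Classic dynamic time warping distance with nearest-template labeling.
import numpy as np

def dtw(a, b):
    """a, b: (n, d) and (m, d) arrays of per-point features
    (e.g. smoothed first derivatives of pen coordinates)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Nearest-neighbor classification over stored symbol templates:
templates = {"ka": np.random.rand(40, 2), "ki": np.random.rand(35, 2)}
stroke = np.random.rand(38, 2)
print(min(templates, key=lambda s: dtw(stroke, templates[s])))
```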