958 results for TRANSFORM


Relevance:

10.00%

Publisher:

Abstract:

In this study, cantilever-enhanced photoacoustic spectroscopy (CEPAS) was applied to different drug detection schemes. The study was divided into two applications: trace detection of vaporized drugs and drug precursors in the gas phase, and detection of cocaine abuse in hair. The main focus, however, was on the hair samples. In the gas phase, methyl benzoate, a hydrolysis product of cocaine hydrochloride, and benzyl methyl ketone (BMK), a precursor of amphetamine and methamphetamine, were investigated. In the solid phase, hair samples from cocaine overdose patients were measured and compared to a drug-free reference group. As hair consists mostly of long fibrous proteins generally called keratin, proteins from fingernails and saliva were also studied for comparison. Different measurement setups were applied in this study. Gas measurements were carried out using quantum cascade lasers (QCL) as the source in the photoacoustic detection. An external cavity (EC) design was also used for a broader tuning range. Detection limits of 3.4 parts per billion (ppb) for methyl benzoate and 26 ppb for BMK in 0.9 s were achieved with the EC-QCL PAS setup. The achieved detection limits are sufficient for realistic drug detection applications. The measurements from drug overdose patients were carried out using Fourier transform infrared (FTIR) PAS. The drug-containing and drug-free hair samples were both measured with the FTIR-PAS setup, and the measured spectra were analyzed statistically with principal component analysis (PCA). With PCA and proper spectral pre-processing, the two groups were separated by their spectra. To improve the method, EC-QCL measurements of the hair samples, and studies using photoacoustic microsampling techniques, were performed. High-quality, high-resolution spectra with a broad tuning range were recorded from a single hair fiber. This broad tuning range of an EC-QCL has not previously been used in the photoacoustic spectroscopy of solids. However, no drug detection studies were performed with the EC-QCL solid-phase setup.
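The statistical separation of drug-containing and drug-free hair spectra described above (spectral pre-processing followed by principal component analysis) is a routine chemometric workflow. A minimal sketch is given below, assuming scikit-learn and using standard normal variate (SNV) scaling as an illustrative pre-processing step; the synthetic data and array names are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (one per row)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Illustrative data only: rows are FTIR-PAS spectra, columns are wavenumber points.
rng = np.random.default_rng(0)
drug_free = rng.normal(size=(20, 900))
drug_positive = rng.normal(size=(20, 900)) + 0.3   # hypothetical spectral offset

X = snv(np.vstack([drug_free, drug_positive]))
scores = PCA(n_components=2).fit_transform(X)

# Plotting the first two principal-component scores (one point per sample)
# would reveal whether the two groups separate, as reported in the study.
print(scores.shape)   # (40, 2)
```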

Relevance:

10.00%

Publisher:

Abstract:

Global warming is arguably the greatest environmental challenge facing humanity in the 21st century. It is primarily caused by anthropogenic greenhouse gases (GHGs) that trap heat in the atmosphere. As a consequence, mitigating GHG emissions globally has become a critical item on the political agenda of all high-profile nations. India, like other developing countries, faces this threat of climate change while dealing with the challenge of sustaining its rapid economic growth. India's economy is closely connected to its natural resource base and to climate-sensitive sectors such as water, agriculture and forestry. Climate change may transform the quality and distribution of India's natural resources and adversely affect the livelihoods of its people. India is therefore expected to face a major threat from the projected climate change. This study proposes possible solutions for GHG emission mitigation that are specific to the Indian power sector. The methods discussed here would take the Indian power sector from its present coal-dominated structure to a system centred on renewable energy sources. The study further proposes a future scenario for 2050, based on present Indian government policies and global advancements in energy technologies.

Relevance:

10.00%

Publisher:

Abstract:

Phytoremediation is a technique that aims to decontaminate soil and water using plants as the decontamination agent. It is an alternative to the conventional methods of pumping and treating water, or of physically removing the contaminated soil layer, and is advantageous mainly because it offers the potential for in situ treatment and is economically viable. Moreover, after extracting the contaminant from the soil, the plant stores it for subsequent treatment when necessary, or even metabolizes it, in some cases transforming it into less toxic or even innocuous products. Phytoremediation can be applied to soils contaminated by inorganic and/or organic substances. Promising phytoremediation results have already been obtained for heavy metals, petroleum hydrocarbons, pesticides, explosives, chlorinated solvents and toxic industrial by-products. Phytoremediation of herbicides has shown good results for atrazine, with the species Kochia scoparia exhibiting rhizospheric potential to phytostimulate the degradation of this molecule. Although still incipient in Brazil, studies already exist on some cultivated agricultural species and on wild or native species from the contaminated area itself, with the aim of selecting species that are efficient in soil phytoremediation.

Relevance:

10.00%

Publisher:

Abstract:

An interferometer for a low-resolution portable Fourier transform mid-infrared spectrometer was developed and studied experimentally. The final aim was a concept for a commercial prototype. Because of the portability requirement, the interferometer should be compact and insensitive to external temperature variations and mechanical vibrations. To minimise size and manufacturing costs, a Michelson interferometer based on plane mirrors and a porch-swing bearing was selected, and no dynamic alignment system was applied. The driving motor was a linear voice coil actuator, avoiding mechanical contact of the moving parts. The capability to drive at the low mirror velocities required by photoacoustic detectors was studied. In total, four versions of such an interferometer were built and experimentally studied. The thermal stability during external temperature variations and the alignment stability over the mirror travel were measured using the modulation depth of a wide-diameter laser beam. A method for estimating the mirror tilt angle from the modulation depth was developed to take into account the effect of the non-uniform intensity distribution of the laser beam. The spectrometer stability was finally studied also using infrared radiation. The latest interferometer was assembled into a mid-infrared spectrometer with a spectral range from 750 cm−1 to 4500 cm−1. The interferometer size was (197 × 95 × 79) mm3 with a beam diameter of 25 mm. The alignment stability, expressed as the change of the tilt angle over the mirror travel of 3 mm, was 5 μrad, which decreases the modulation depth in the infrared at 3000 cm−1 by only about 0.7 percent. During a temperature rise, the modulation depth at 3000 cm−1 changed by about 1–2 percentage points per degree Celsius over the short term and by less than 0.2 percentage points per degree Celsius over the total temperature rise of 30 °C. The unapodised spectral resolution was 4 cm−1, limited by the aperture size. The best achieved signal-to-noise ratio was about 38 000:1 with a commercially available DLaTGS detector. Although the vibration sensitivity still requires improvement, the interferometer as a whole performed very well and could be further developed to meet all the requirements of a portable and stable spectrometer.
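The link between mirror tilt and modulation depth quoted above can be illustrated with the standard plane-mirror Michelson expression, in which a mirror tilt α shears the recombined wavefronts by 2α across the beam; for a uniformly illuminated circular beam of radius a at wavenumber ν̃ the modulation factor is 2J1(x)/x with x = 4πν̃aα. This is the textbook approximation, not necessarily the exact model developed in the thesis (which also accounts for the non-uniform laser beam profile); the sketch below simply reproduces the quoted figure of roughly 0.7 % loss for a 5 μrad tilt at 3000 cm−1 with a 25 mm beam.

```python
import numpy as np
from scipy.special import j1

def modulation_factor(wavenumber_cm, beam_radius_cm, tilt_rad):
    """Modulation depth factor 2*J1(x)/x for a tilted plane mirror.

    Textbook model: a uniformly illuminated circular beam, with the mirror
    tilt shearing the wavefronts by twice the tilt angle.
    """
    x = 4.0 * np.pi * wavenumber_cm * beam_radius_cm * tilt_rad
    return 1.0 if x == 0 else 2.0 * j1(x) / x

# Figures quoted in the abstract: 5 urad tilt, 25 mm beam, 3000 cm^-1.
m = modulation_factor(wavenumber_cm=3000, beam_radius_cm=1.25, tilt_rad=5e-6)
print(f"modulation loss ~ {(1 - m) * 100:.2f} %")   # ~ 0.7 %
```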

Relevance:

10.00%

Publisher:

Abstract:

End-user development is a very common but often largely overlooked phenomenon in information systems research and practice. End-user development means that regular people, the end-users of software rather than professional developers, are doing software development. A large number of people are directly or indirectly affected by the results of these non-professional development activities. The number of users performing end-user development activities is difficult to ascertain precisely, but it is very large and still growing. Computer adoption is growing towards 100%, and many new types of computational devices are continually being introduced. In addition, devices that were not previously programmable are becoming so. This means that, at this very moment, hundreds of millions of people are likely struggling with development problems. Furthermore, software itself is continually being adapted for more flexibility, enabling users to change the behaviour of their software themselves. New software and services are helping to transform users from consumers into producers. Much of this is now found online. The problem for the end-user developer is that little of this development is supported by anyone. Organisations often do not notice end-user development and consequently neither provide support for it nor are equipped to do so. Many end-user developers do not belong to any organisation at all. The end-user development process itself may also aggravate the problem. End-users are usually not strongly committed to the development process, which tends to be iterative and ad hoc. This means support becomes a distant third behind getting the job done and figuring out the development issues needed to get the job done. Sometimes the software itself may exacerbate the issue by simplifying the development process and de-emphasising the difficulty of the task being undertaken. Online support could be the lifeline the end-user developer needs. Going online, one can find all the knowledge one could ever need. However, that alone does not help the end-user apply this information or knowledge in practice. A virtual community, through its ability to adopt the end-user's specific context, could surmount this final obstacle. This thesis explores the concept of end-user development and how it could be supported through online sources, in particular virtual communities, which, it is argued here, seem to fit the end-user developer's needs very well. The experiences of real end-user developers and prior literature were used in this process. Emphasis has been placed on those end-user developers, e.g. small business owners, who may have literally nowhere to turn to for support. Adopting the viewpoint of the end-user developer, the thesis examines how an end-user could use a virtual community effectively, improving the results of the support process in the common situation where the demand for support outstrips the supply.

Relevance:

10.00%

Publisher:

Abstract:

After the socialist revolution, Soviet Russia had to resolve, among many other questions, the defence of the achievements of socialism. Initially the planned solution was a Red Guard based on volunteers, but to secure sufficient manpower, general conscription was adopted. Soon after the Russian Civil War, the direction of military art came to be shaped more by the views of the old army's experts than by the experiences of the heroes of the revolution, even though the doctrine Frunze wrote for the Red Army was based on class struggle and emphasized the operational mobility found effective in the civil war. The foundation of Soviet and Russian military art is the Western orientation begun by Peter I, complemented, however, by strong national characteristics. Aleksandr Suvorov can with good reason be regarded as the spiritual father of Russian military art; his teachings are visible not only in quoted texts but also in principles and in military education. The Imperial General Staff Academy, founded after the Napoleonic Wars, established military-scientific research and teaching in Russia. The possibilities of military science were not fully exploited in nineteenth-century Russia. An attitude that belittled the significance of the growing effectiveness of weaponry led to a decline in military art and to catastrophe in the Russo-Japanese War. In analysing its lessons, Aleksandr Neznamov further developed the German concept of the operation and laid the foundation for the operational art developed in the Soviet Union in the 1920s. The aim of Soviet military art was to develop a tactical and operational answer to the superiority of the defence brought about by the growing effectiveness of weaponry. The solution drew on British experience and research. Soviet tactics and operational art were, however, not eastern copies of British mechanized warfare or the German blitzkrieg, but were based on independent solutions. The theory of deep battle and the deep operation was tested in exercises and developed until Stalin's purges of 1937. In the battles of the Second World War, after the initial catastrophe, the Red Army applied the doctrines of deep battle and the deep operation. The skill of commanders and troops was not sufficient for operating as the theory demanded, and so the battle intended to be deep at times became merely dense. On the basis of the experience of the Great Patriotic War, Soviet military science developed the principles of combined-arms combat, which have remained unchanged to the present day. During the Cold War era, the significance of nuclear and conventional weaponry in the picture of war and battle varied. The development of Western military art and weapons technology forced the Soviet Union in the 1980s to shift its military thinking from the offensive to the defensive. Since the dissolution of the Soviet Union, the guarantor of Russia's military security has been its nuclear arsenal. The conventional aerospace attack capability of the United States compels Russia to develop defensive counter-systems. In building its conventional forces, Russia closely follows the development of Western military art, but holds to original solutions of its own, in whose development its strong military-scientific system and the method of dialectical materialism continue to play an essential role.

Relevance:

10.00%

Publisher:

Abstract:

Developing bioimage informatics, from microscopy to software solutions, with the α2β1 integrin as an application example. When the human genome was sequenced in 2003, the main task of the biosciences became determining the functions of different genes, and various bioimaging techniques became central research methods. Technological advances led in particular to an explosive growth in the popularity of fluorescence-based light microscopy techniques, but microscopy had to change from a qualitative science into a quantitative one. This change gave rise to a new discipline, bioimage informatics, which has been said to have the potential to revolutionize the biosciences. This doctoral thesis presents a broad, interdisciplinary body of work in the field of bioimage informatics. The first aim of the thesis was to develop protocols for four-dimensional live-cell confocal microscopy, which was one of the fastest-growing bioimaging methods. The human collagen receptor α2β1 integrin, an important molecule in many physiological and pathological processes, served as the application example. Clear visualizations of integrin movement, clustering and internalization into the cell were achieved, but tools for quantitative analysis of the image information did not exist. The second aim of the thesis therefore became the development of software suited to such analysis. At the same time, bioimage informatics was emerging, and what the new field needed most urgently were specialized software tools. The most important result of this thesis work thus became BioImageXD, a new kind of open-source software for the visualization, processing and analysis of multidimensional bioimages. BioImageXD grew into one of the largest and most versatile tools of its kind. It was published in the special issue of Nature Methods on bioimage informatics, and it became well known and widely used. The third aim of the thesis was to apply the developed methods to something more practical. Synthetic silica nanoparticles were prepared, carrying as "address labels" antibodies that recognize the α2β1 integrin. With BioImageXD it was shown that the nanoparticles have potential in targeted drug delivery applications. One underlying aim of this thesis work was to advance the new and unfamiliar discipline of bioimage informatics, and this aim was achieved in particular through BioImageXD and its numerous published applications. The work has considerable potential for the future, but bioimage informatics faces serious challenges. The field is too complex for the average biomedical researcher to master, and its most central element, open-source software development, is undervalued. Several improvements to these issues are needed,

Relevance:

10.00%

Publisher:

Abstract:

Can crowdsourcing solutions serve many masters? Can they benefit both the layman or native speaker of minority languages on the one hand and serious linguistic research on the other? How did an infrastructure that was designed to support linguistics turn out to be a solution for raising awareness of native languages? Since 2012 the National Library of Finland has been developing the Digitisation Project for Kindred Languages, in which the key objective is to support a culture of openness and interaction in linguistic research, but also to promote crowdsourcing as a tool for participation of the language community in research. In the course of the project, over 1,200 monographs and nearly 111,000 pages of newspapers in Finno-Ugric languages will be digitised and made available in the Fenno-Ugrica digital collection. This material was published in the Soviet Union in the 1920s and 1930s, and users have previously had only sporadic access to it. The publication of open-access and searchable materials from this period is a goldmine for researchers. Historians, social scientists and laymen with an interest in specific local publications can now find text materials pertinent to their studies. The linguistically oriented population can also find writings to delight them: (1) lexical items specific to a given publication, and (2) orthographically documented specifics of phonetics. In addition to the open-access collection, we developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. This tool was necessary because these rare and peripheral prints often include archaic characters that are neglected by modern OCR software developers but belong to the historical context of the kindred languages, and are thus an essential part of the linguistic heritage. When modelling the OCR editor, it was essential to consider both the needs of researchers and the capabilities of lay citizens, and to have them participate in the planning and execution of the project from the very beginning. By implementing the feedback iteratively from both groups, it was possible to turn the requested changes into tools for research that not only supported the work of linguists but also encouraged the citizen scientists to face the challenge and work with the crowdsourcing tools for the benefit of research. This presentation will not only deal with the technical aspects, developments and achievements of the infrastructure but will also highlight the way in which the user groups, researchers and lay citizens, were engaged in the process as an active and communicative group of users, and how their contributions were put to mutual benefit.

Relevance:

10.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

10.00%

Publisher:

Abstract:

Workshop at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

10.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies are defined which are able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is, in the context of design space exploration, optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
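As an illustration of the dataflow model described above, the sketch below builds a tiny actor network in Python in which nodes communicate only through FIFO queues and fire whenever sufficient input tokens are available. The actor names and the naive dynamic scheduler are illustrative assumptions for exposition; they are not RVC-CAL and not the thesis's compiler infrastructure or quasi-static scheduler.

```python
from collections import deque

class Actor:
    """A dataflow node: fires only when every input queue holds enough tokens."""
    def __init__(self, name, inputs, outputs, consume, fire):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.consume, self.fire = consume, fire   # tokens needed per port, firing function

    def can_fire(self):
        return all(len(q) >= n for q, n in zip(self.inputs, self.consume))

    def step(self):
        # Consume the required tokens from each input port and produce outputs.
        args = [[q.popleft() for _ in range(n)] for q, n in zip(self.inputs, self.consume)]
        for out, token in zip(self.outputs, self.fire(*args)):
            out.append(token)

# Two FIFO channels: source tokens -> doubler -> printer.
a, b = deque([1, 2, 3, 4]), deque()
doubler = Actor("double", [a], [b], consume=[1], fire=lambda xs: [2 * xs[0]])
printer = Actor("print", [b], [], consume=[1], fire=lambda xs: print(xs[0]) or [])

# A naive dynamic scheduler: repeatedly fire any actor whose inputs suffice.
actors = [doubler, printer]
while any(actor.can_fire() for actor in actors):
    for actor in actors:
        if actor.can_fire():
            actor.step()
```

A quasi-static scheduler, in the spirit of the thesis, would replace the run-time `can_fire` checks inside the loop with pre-computed static firing sequences, leaving only a few data-dependent decisions to be made at run time.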

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents a framework for segmentation of clustered, overlapping convex objects. The proposed approach is based on a three-step framework in which the tasks of seed point extraction, contour evidence extraction, and contour estimation are addressed. The state-of-the-art techniques for each step were studied and evaluated using synthetic and real microscopic image data. Based on the evaluation results obtained, a method combining the best performers in each step is presented. In the proposed method, the Fast Radial Symmetry transform, an edge-to-marker association algorithm, and ellipse fitting are employed for seed point extraction, contour evidence extraction, and contour estimation, respectively. Using synthetic and real image data, the proposed method was evaluated and compared with two competing methods, and the results showed a promising improvement over the competing methods, with high segmentation and size distribution estimation accuracy.
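The final contour-estimation step, fitting an ellipse to the contour evidence of each object, can be sketched with OpenCV as below (assuming the OpenCV 4 findContours signature). The thresholded synthetic image and the direct use of cv2.fitEllipse are illustrative stand-ins only; the thesis's full pipeline additionally performs seed point extraction and edge-to-marker association before fitting.

```python
import cv2
import numpy as np

def fit_ellipses(binary_mask):
    """Fit an ellipse to each sufficiently long contour in a binary mask.

    Returns a list of (centre, axes, angle) tuples as produced by cv2.fitEllipse.
    This stands in only for the contour-estimation step; contour-evidence
    extraction for overlapping objects is not reproduced here.
    """
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    ellipses = []
    for contour in contours:
        if len(contour) >= 5:              # cv2.fitEllipse requires >= 5 points
            ellipses.append(cv2.fitEllipse(contour))
    return ellipses

# Illustrative usage on a synthetic image containing one filled circle.
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (100, 100), 40, 255, -1)
print(fit_ellipses(img))
```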

Relevance:

10.00%

Publisher:

Abstract:

Within the framework of the biorefinery concept, researchers aspire to optimize the utilization of plant materials, such as agricultural wastes and wood. For most of the known processes, the first steps in the valorisation of biomass are the extraction and purification of the individual components. The raw products obtained by means of a controlled separation can subsequently be modified into biofuels or biogas for energy production, but also into value-added products such as additives and important building blocks for the chemical and material industries. Considerable efforts are being undertaken to substitute oil-based starting materials, or at least to minimize their use in the production of everyday goods. Wood is one of the raw materials that has gained large attention in the last decades, and its composition has been studied in detail. Nowadays, the extraction of water-soluble hemicelluloses from wood is well known; for example, xylan can be obtained from hardwoods and O-acetyl galactoglucomannans (GGMs) from softwoods. The aim of this work was to develop water-soluble amphiphilic materials from GGM and to assess their potential use as additives. Furthermore, GGM was also applied as a crosslinker in the synthesis of functional hydrogels for the removal of toxic metal and metalloid ions from aqueous solutions. The products were obtained by several chemical approaches and analysed by nuclear magnetic resonance spectroscopy (NMR), Fourier transform infrared spectroscopy (FTIR), size exclusion chromatography (SEC), thermal gravimetric analysis (TGA) and scanning electron microscopy (SEM), among other methods. Bio-based surfactants were produced using GGM and different fatty acids as starting materials. On the one hand, GGM-grafted fatty acids were prepared by esterification; on the other hand, well-defined GGM-block-fatty acid derivatives were obtained by linking amino-functional fatty acids to the reducing end of GGM. The reaction conditions for the syntheses were optimized, and the resultant amphiphilic GGM derivatives were evaluated for their ability to reduce the surface tension of water as surfactants. Furthermore, the block-structured derivatives were tested with respect to their applicability as additives for the surface modification of cellulosic materials. Besides the GGM surfactants with a bio-based hydrophilic and a bio-based hydrophobic part, GGM block-structured derivatives with a synthetic hydrophobic tail, consisting of a polydimethylsiloxane chain, were also prepared and assessed for the hydrophobization of the surface of nanofibrillated cellulose films. In order to generate GGM block-structured derivatives containing a synthetic tail with distinct physical and chemical properties, as well as a tailored chain length, a controlled polymerization method was used. First, an initiator group was introduced at the reducing end of the GGM, and subsequently single-electron transfer living radical polymerization (SET-LRP) was performed with three different monomers in individual reactions. For the accomplishment of the synthesis and the analysis of the products, challenges related to the solubility of the reactants had to be overcome. Overall, a synthesis route for the production of GGM block copolymers bearing different synthetic polymer chains was developed and several derivatives were obtained.
Moreover, GGM samples with different molar masses were, after modification, used as crosslinkers in the synthesis of functional hydrogels. Here, a cationic monomer was used during the free radical polymerization, and the resultant hydrogels were successfully tested for the removal of chromium and arsenic ions from aqueous solutions. The hydrogel synthesis was tailored, and materials with distinct physical properties, such as the swelling rate, were obtained after purification. The results generated in this work underline the potential of bio-based products and the need to continue this research so that more green chemicals can be used for the manufacturing of biorenewable and biodegradable everyday products.

Relevance:

10.00%

Publisher:

Abstract:

We describe a low-cost, high-quality device capable of monitoring indirect activity by detecting touch-release events on a conducting surface, i.e., the animal's cage cover. In addition to the detecting sensor itself, the system includes an IBM PC interface for prompt data storage. The hardware/software design, while serving other purposes as well, is used to record the circadian activity rhythm pattern of rats over time in an automated, computerized fashion using minimal-cost computer equipment (IBM PC XT). Once the sensor detects a touch-release action of the rat in the upper portion of the cage, the interface sends a command to the PC, which records the time (hours-minutes-seconds) at which the activity occurred. As a result, the computer builds up several files (one per detector/sensor) containing a time list of all recorded events. Data can be visualized in terms of actograms, indicating the number of detections per hour, and analyzed by mathematical tools such as the Fast Fourier Transform (FFT) or cosinor analysis. In order to demonstrate method validation, an experiment was conducted on 8 Wistar rats under 12/12-h light/dark cycle conditions (lights on at 7:00 a.m.). The results provide a biological validation of the method, since it detected the presence of circadian activity rhythm patterns in the behavior of the rats.
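The analysis step mentioned above, binning touch-release timestamps into hourly counts and looking for a circadian component with an FFT, can be sketched as follows. The synthetic timestamps and the dominant-period check are illustrative assumptions for demonstration, not the original IBM PC software or its data format.

```python
import numpy as np

# Illustrative event timestamps (seconds over 8 days) with a 24 h rhythm:
# more touch-release events during the dark phase of each day.
rng = np.random.default_rng(1)
days, per_day = 8, 400
t = np.concatenate([d * 86400 + rng.uniform(12 * 3600, 24 * 3600, per_day)
                    for d in range(days)])

# Actogram-style binning: number of detections per hour.
hours = np.arange(0, days * 24 + 1)
counts, _ = np.histogram(t / 3600.0, bins=hours)

# FFT of the mean-subtracted hourly counts; locate the dominant period.
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(len(counts), d=1.0)          # cycles per hour
idx = 1 + np.argmax(spectrum[1:])                    # skip the DC bin
print(f"dominant period ~ {1.0 / freqs[idx]:.1f} h")  # expected ~ 24 h
```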

Relevance:

10.00%

Publisher:

Abstract:

The rodent endometrium undergoes remarkable modifications during pregnancy, resulting from a redifferentiation of its fibroblasts. During this modification (decidualization), the fibroblasts transform into large, polyhedral cells that establish intercellular junctions. Decidualization proceeds from the subepithelial stroma towards the deep stroma situated next to the myometrium and creates regions composed of cells at different stages of differentiation. We studied by autoradiography whether cells of these different regions have different levels of macromolecular synthesis. Radioactive amino acids or radioactive sulfate were administered to mice during estrus or on different days of pregnancy. The animals were killed 30 min after injection of the precursors and the uteri were processed for light microscope autoradiography. Silver grains were counted over cells of different regions of the endometrium and are reported as the number of silver grains per unit area. Higher levels of incorporation of amino acids were found in pregnant animals compared to animals in estrus. In pregnant animals, the region of decidual cells or the region of fibroblasts transforming into decidual cells showed the highest levels of synthesis. Radioactive sulfate incorporation, on the other hand, was generally higher in nonpregnant animals. Animals without decidual cell transformation (nonpregnant and on the 4th day of pregnancy) showed differential incorporation by subepithelial and deep stroma fibroblasts. This study shows that regional differences in synthetic activity exist among cells at different stages of transformation into decidual cells as well as in different regions of the endometrium of nonpregnant mice.