960 results for Map-based Cloning
Abstract:
The objective of this thesis is to answer the question: "How are informal decisions reached by screeners when filtering out undesirable job applications?" Grounded theory techniques were employed in the field to observe and analyse informal decisions at the source, made by screeners, across three distinct empirical studies. Whilst grounded theory provided the method for case and cross-case analysis, literature from academic and non-academic sources was evaluated and integrated to strengthen this research and create a foundation for understanding informal decisions. As informal decisions in early hiring processes have been under-researched, this thesis contributes to current knowledge in several ways. First, it presents the Cycle of Employment, which enhances Robertson and Smith's (1993) Selection Paradigm through the integration of the stages that individuals occupy whilst seeking employment. Secondly, a general depiction of the Workflow of General Hiring Processes provides a template for practitioners to map and further develop their organisational processes. Finally, it highlights the emergence of the Locality Effect, a geographically driven heuristic and bias that can significantly affect recruitment and informal decisions. Although screeners make informal decisions using multiple variables, informal decisions are made in stages, as evidenced in the Cycle of Employment. Moreover, informal decisions can be erroneous as a result of majority and minority influence, the weighting of information, the injection of inappropriate information and criteria, and the influence of an assessor. This thesis considers these faults and develops a basic framework for understanding informal decisions from which future research can be launched.
Abstract:
Background: Vigabatrin (VGB) is an anti-epileptic medication which has been linked to peripheral constriction of the visual field. Documenting the natural history associated with continued VGB exposure is important when making decisions about the risks and benefits associated with the treatment. Due to its speed, the Swedish Interactive Threshold Algorithm (SITA) has become the algorithm of choice when carrying out Full Threshold automated static perimetry. SITA uses prior distributions of normal and glaucomatous visual field behaviour to estimate threshold sensitivity. As the abnormal model is based on glaucomatous behaviour, this algorithm has not been validated for VGB recipients. We aim to assess the clinical utility of the SITA algorithm for accurately mapping VGB-attributed field loss. Methods: The sample comprised one randomly selected eye from each of 16 patients diagnosed with epilepsy and exposed to VGB therapy. A clinical diagnosis of VGB-attributed visual field loss was documented in 44% of the group. The mean age was 39.3 ± 14.5 years and the mean deviation was -4.76 ± 4.34 dB. Each patient was examined with the Full Threshold, SITA Standard, and SITA Fast algorithms. Results: SITA Standard was on average approximately twice as fast (7.6 minutes) and SITA Fast approximately three times as fast (4.7 minutes) as examinations completed using the Full Threshold algorithm (15.8 minutes). In the clinical environment, the visual field outcome with both SITA algorithms was equivalent to that of the Full Threshold algorithm in terms of visual inspection of the grey-scale plots, defect area, and defect severity. Conclusions: Our research shows that both SITA algorithms are able to accurately map visual field loss attributed to VGB. As patients diagnosed with epilepsy are often vulnerable to fatigue, the time saving offered by SITA Fast gives this algorithm a significant advantage for use with VGB recipients.
Abstract:
ACM Computing Classification System (1998): H.5.2, H.2.8, J.2, H.5.3.
Abstract:
In the present study, we investigated the role of spatial locative comprehension in learning and retrieving pathways, with and without landmarks, in a sample of typically developing 6- to 11-year-old children. Our results show that the more proficient children are in understanding spatial locatives, the better they are able to learn pathways, retrieve them after a delay, and represent them on a map when landmarks are present in the environment. These findings suggest that spatial language is crucial when individuals rely on sequences of landmarks to guide their navigation towards a given goal, but that it is not involved when navigational representations based on the geometrical shape of the environment or on the coding of body movements are sufficient for memorizing and recalling short pathways.
Abstract:
An object-based image analysis (OBIA) approach was used to create a habitat map of the Lizard Reef. Briefly, georeferenced dive and snorkel photo-transect surveys were conducted at different locations surrounding Lizard Island, Australia. For the surveys, a snorkeler or diver swam over the bottom at a depth of 1-2 m in the lagoon, One Tree Beach and Research Station areas, and at 7 m depth in Watson's Bay, taking photos of the benthos at a set height with a standard digital camera while towing a surface-float GPS that logged its track every five seconds. The camera lens provided a 1.0 m x 1.0 m footprint at 0.5 m height above the benthos. Horizontal distance between photos was estimated by fin kicks and corresponded to a surface distance of approximately 2.0-4.0 m. The coordinates of each benthic photo were approximated from the photo timestamp and the GPS track timestamps using GPS Photo Link Software (www.geospatialexperts.com): the coordinates of each photo were interpolated from the GPS fixes logged a set time before and after the photo was captured. Dominant benthic or substrate cover type was assigned to each photo by placing 24 random points over each image using the Coral Point Count with Excel extensions (CPCe) program (Kohler and Gill, 2006). Each point was then assigned a dominant cover type using a benthic cover type classification scheme containing nine first-level categories: seagrass high (>=70%), seagrass moderate (40-70%), seagrass low (<=30%), coral, reef matrix, algae, rubble, rock and sand. Benthic cover composition summaries of each photo were generated automatically in CPCe. The resulting benthic cover data for each photo were linked to the GPS coordinates, saved as an ArcMap point shapefile, and projected to Universal Transverse Mercator WGS84 Zone 56 South. The OBIA class assignment followed a hierarchical scheme based on membership rules, with levels for "reef", "geomorphic zone" and "benthic community".
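As a rough illustration of the timestamp-based geolocation step described above, the sketch below interpolates a photo's position between the GPS fixes logged just before and just after its timestamp; the function name, the linear interpolation, and the example coordinates are assumptions for illustration, not the GPS Photo Link implementation.

```python
from datetime import datetime
from bisect import bisect_left

def interpolate_photo_position(photo_time, track):
    """Estimate a photo's position from a GPS track logged at fixed intervals.

    track: list of (datetime, lat, lon) tuples sorted by time.
    Returns (lat, lon) linearly interpolated between the fixes logged
    just before and just after the photo timestamp.
    """
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    if i == 0:
        return track[0][1], track[0][2]
    if i >= len(track):
        return track[-1][1], track[-1][2]
    (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
    # Fraction of the logging interval elapsed when the photo was taken.
    w = (photo_time - t0).total_seconds() / (t1 - t0).total_seconds()
    return lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0)

# Example: fixes logged every five seconds, photo taken between two fixes
# (coordinates are hypothetical, roughly in the Lizard Island area).
track = [
    (datetime(2011, 9, 1, 10, 0, 0), -14.6870, 145.4490),
    (datetime(2011, 9, 1, 10, 0, 5), -14.6871, 145.4492),
]
print(interpolate_photo_position(datetime(2011, 9, 1, 10, 0, 2), track))
```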
Abstract:
A representative and consistent circumpolar wetland map is required for applications ranging from the upscaling of carbon fluxes and pools to climate modelling and wildlife habitat assessment. Currently available data sets lack sufficient accuracy and/or thematic detail in many regions of the Arctic. Synthetic aperture radar (SAR) data from satellites have already been shown to be suitable for wetland mapping. Envisat Advanced SAR (ASAR) provides global medium-resolution data, which are examined in this study with particular focus on spatial wetness patterns. It was found that winter minimum backscatter values, as well as their differences from summer minimum values, reflect vegetation physiognomy units of certain wetness regimes. Low winter backscatter values are mostly found in areas vegetated by plant communities typical of wet regions in the tundra biome, owing to the low roughness and low volume scattering of the predominant vegetation. Summer-to-winter difference backscatter values, which in contrast to the winter values depend almost solely on soil moisture content, show the expected higher values for wet regions. While the approach using difference values would seem more reasonable for delineating wetness patterns, given its direct link to soil moisture, a classification of winter minimum backscatter values proved more applicable in tundra regions because of its better separability into wetness classes. Previous approaches to wetland detection have investigated the impact of liquid water in the soil on backscatter; in this study the absence of liquid water is utilized. Owing to the lack of regional to circumpolar data of comparable thematic detail, a potential wetland map cannot be validated directly; however, its validity can be assessed by comparison with vegetation maps, which hold some information on the wetness status of certain classes. It was shown that the Envisat ASAR-derived classes are related to wetland classes of conventional vegetation maps, indicating the approach's applicability; 30% of the land area north of the treeline was identified as wetland, while conventional maps record 1-7%.
Abstract:
This paper provides a method for constructing a new historical global nitrogen fertilizer application map (0.5° × 0.5° resolution) for the period 1961-2010, based on country-specific information from Food and Agriculture Organization statistics (FAOSTAT) and various global datasets. This new map incorporates the fraction of NH4+ (and NO3-) in N fertilizer inputs by utilizing fertilizer species information in FAOSTAT, in which species can be categorized as NH4+- and/or NO3--forming N fertilizers. During data processing, we applied a statistical data imputation method for the missing data (19% of national N fertilizer consumption) in FAOSTAT. The multiple imputation method enabled us to fill gaps in the time-series data with plausible values drawn using covariate information (year, population, GDP, and crop area). After the imputation, we downscaled the national consumption data to a gridded cropland map. We also applied the multiple imputation method to the available chemical fertilizer species consumption, allowing estimation of the NH4+/NO3- ratio in national fertilizer consumption. The synthetic N fertilizer inputs in 2000 show general consistency with the existing N fertilizer map (Potter et al., 2010, doi:10.1175/2009EI288.1) in terms of the ranges of N fertilizer inputs. Globally, the estimated N fertilizer inputs based on the sum of the filled data increased from 15 Tg-N to 110 Tg-N during 1961-2010. The global NO3- input, on the other hand, started to decline after the late 1980s, and the fraction of NO3- in global N fertilizer decreased consistently from 35% to 13% over the 50-year period. NH4+-based fertilizers are dominant in most countries; however, the NH4+/NO3- ratio in N fertilizer inputs shows clear differences temporally and geographically. This new map can be utilized as input data for global model studies and can bring new insights to the assessment of historical changes in terrestrial N cycling.
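As a minimal sketch of the downscaling step, assuming a national consumption total is spread over grid cells in proportion to their cropland area (the grid values and the 1.5 Tg-N total below are hypothetical, not FAOSTAT figures):

```python
import numpy as np

def downscale_to_grid(national_total_tg, cropland_area_km2):
    """Distribute a national N-fertilizer total over grid cells
    in proportion to each cell's cropland area."""
    weights = cropland_area_km2 / cropland_area_km2.sum()
    return national_total_tg * weights

# Hypothetical 2 x 3 cropland-area grid (km^2) for one country.
cropland = np.array([[120.0,  80.0,   0.0],
                     [300.0,  50.0, 450.0]])
grid_n_input = downscale_to_grid(1.5, cropland)  # 1.5 Tg-N national total
print(grid_n_input)  # per-cell N input, summing back to 1.5 Tg-N
```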
Abstract:
UK engineering standards are regulated by the Engineering Council (EC) using a set of generic threshold competence standards which all professionally registered Chartered Engineers in the UK must demonstrate, underpinned by a separate academic qualification at Masters level. As part of an EC-led national project for the development of work-based learning (WBL) courses leading to Chartered Engineer registration, Aston University has started an MSc Professional Engineering programme, a development of a model originally designed by Kingston University, built around a set of generic modules which map onto the competence standards. The learning pedagogy of these modules conforms to a widely recognised experiential learning model, with refinements incorporated from a number of other learning models. In particular, the use of workplace mentoring to support the development of critical reflection and to overcome barriers to learning is being incorporated into the learning space. This discussion paper explains the work that was done in collaboration with the EC and a number of Professional Engineering Institutions to design a course structure and curricular framework that optimises the engineering learning process for engineers already working across a wide range of industries, and that addresses issues of engineering sustainability. It also explains the thinking behind the work that has been started to provide an international version of the course, built around a set of globalised engineering competences. © 2010 W J Glew, E F Elsworth.
Abstract:
Cumulon is a system aimed at simplifying the development and deployment of statistical analysis of big data in public clouds. Cumulon allows users to program in their familiar language of matrices and linear algebra, without worrying about how to map data and computation to specific hardware and cloud software platforms. Given user-specified requirements in terms of time, monetary cost, and risk tolerance, Cumulon automatically makes intelligent decisions on implementation alternatives, execution parameters, and hardware provisioning and configuration settings, such as what type of machines to acquire and how many of them. Cumulon also supports clouds with auction-based markets: it effectively utilizes computing resources whose availability varies according to market conditions, and suggests the best bidding strategies for them. Cumulon explores two alternative approaches toward supporting such markets, with different trade-offs between system and optimization complexity. An experimental study is conducted to show the efficiency of Cumulon's execution engine, as well as the optimizer's effectiveness in finding the optimal plan in the vast plan space.
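A toy sketch of the kind of provisioning decision described here, choosing the machine type and count that meet a deadline at minimum estimated cost under an assumed near-linear scaling model; the machine types, prices, and speeds are hypothetical and this is not Cumulon's actual cost model or API.

```python
from dataclasses import dataclass

@dataclass
class MachineType:
    name: str
    hourly_price: float    # $ per machine-hour
    relative_speed: float  # throughput relative to a baseline machine

def cheapest_plan(total_work_hours, deadline_hours, machine_types, max_machines=64):
    """Pick the (machine type, count) that finishes within the deadline
    at the lowest estimated cost, assuming near-linear scaling."""
    best = None
    for m in machine_types:
        for n in range(1, max_machines + 1):
            runtime = total_work_hours / (n * m.relative_speed)
            if runtime > deadline_hours:
                continue  # misses the deadline, discard
            cost = runtime * n * m.hourly_price
            if best is None or cost < best[0]:
                best = (cost, m.name, n, runtime)
    return best

plans = [MachineType("small", 0.10, 1.0), MachineType("large", 0.35, 4.0)]
print(cheapest_plan(total_work_hours=40, deadline_hours=2, machine_types=plans))
```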
Abstract:
Recently, blood oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI) has become a routine clinical procedure for localization of language and motor brain regions and has been replacing more invasive preoperative procedures. However, the fMRI results from these tasks are not always reproducible, even within the same patient. Evaluating the reproducibility of language and speech mapping is especially complicated because of the complex brain circuitry that may become activated during the functional task. Non-language areas such as sensory, attention, decision-making, and motor regions may be activated in addition to the specific language regions during a traditional sentence-completion task. In this study, I test a new approach, which uses 4-minute video-based tasks, to map language and speech brain regions in patients undergoing brain surgery. Results from 35 subjects have shown that the video-based task activates Wernicke's area, as well as Broca's area in most subjects. The computed laterality indices, which indicate the dominant hemisphere for that functional task, showed left-hemisphere dominance for the video-based tasks. This study has shown that the video-based task may be an alternative method for localization of language and speech brain regions in patients who are unable to complete the sentence-completion task.
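The laterality index mentioned here is conventionally computed from counts of supra-threshold voxels in homologous left- and right-hemisphere regions of interest, LI = (L - R) / (L + R); the sketch below uses that standard formulation as an assumption about how the indices were derived, and the voxel counts are hypothetical.

```python
def laterality_index(left_voxels, right_voxels):
    """Standard laterality index: +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = bilateral."""
    total = left_voxels + right_voxels
    if total == 0:
        raise ValueError("no activated voxels in either hemisphere")
    return (left_voxels - right_voxels) / total

# Hypothetical supra-threshold voxel counts in Broca's area and its right homologue.
print(laterality_index(left_voxels=420, right_voxels=150))  # ~0.47 -> left dominant
```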
Abstract:
Users seeking information may not find content relevant to their information need in a specific language; the information may be available in a language different from their own, one they may not know. Users may therefore have difficulty accessing information present in other languages. Since the retrieval process depends on translating the user query, obtaining the right translation raises many issues. For a pair of languages chosen by a user, the available resources, such as an incomplete dictionary or an inaccurate machine translation system, may be insufficient to map the query terms in one language to their equivalent terms in the other. Moreover, a given query may have multiple correct translations, and the underlying corpus evidence may suggest which subset of translations is likely to yield better retrieval. In this paper, we present a cross-language information retrieval approach to effectively retrieve information present in a language other than the language of the user query, using a corpus-driven query suggestion approach. The idea is to utilize the corpus-based evidence of one language to improve the retrieval and re-ranking of news documents in the other language. We use the FIRE corpora (Tamil and English news collections) in our experiments and illustrate the effectiveness of the proposed cross-language information retrieval approach.
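One simple way to realize the corpus-driven selection idea is to score each candidate translation of a query term by its document frequency in the target-language collection and keep only the strongest candidates; the candidate list, toy corpus, and scoring below are illustrative assumptions, not the paper's exact method.

```python
from collections import Counter

def pick_translations(candidates, target_corpus, top_k=2):
    """Rank candidate translations of a query term by how often they
    appear in the target-language corpus, keeping the top_k."""
    df = Counter()
    for doc in target_corpus:
        tokens = set(doc.lower().split())
        for cand in candidates:
            if cand in tokens:
                df[cand] += 1
    ranked = sorted(candidates, key=lambda c: df[c], reverse=True)
    return ranked[:top_k]

# Hypothetical English candidates for a Tamil query term, scored against
# a toy English news collection.
corpus = ["election results announced today",
          "the poll results were contested",
          "voting concluded peacefully"]
print(pick_translations(["election", "poll", "ballot"], corpus))
```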
Abstract:
New information on possible resource value of sea floor manganese nodule deposits in the eastern north Pacific has been obtained by a study of records and collections of the 1972 Sea Scope Expedition. Nodule abundance (percent of sea floor covered) varies greatly, according to photographs from eight stations and data from other sources. All estimates considered reliable are plotted on a map of the region. Similar maps show the average content of Ni, Cu, Mn and Co at 89 stations from which three or more nodules were analyzed. Variations in nodule metal content at each station are shown graphically in an appendix, where data on nodule sizes are also given. Results of new analyses of 420 nodules from 93 stations for Mn, Fe, Ni, Cu, Co, and Zn are listed in another appendix. Relatively high Ni + Cu content is restricted chiefly to four groups of stations in the equatorial region, where group averages are 1.86, 1.99, 2.47, and 2.55 weight-percent. Prepared for United States Department of the Interior, Bureau of Mines. Grant no. GO284008-02-MAS. - NTIS PB82-142571.
Abstract:
Changes in the emission, transport and deposition of aeolian dust have profound effects on regional climate, so characterizing the lifecycle of dust in observations and improving the representation of dust in global climate models are necessary. A fundamental aspect of characterizing the dust cycle is quantifying surface dust fluxes, yet no spatially explicit estimates of this flux exist for the world's major source regions. Here we present a novel technique for creating a map of the annual mean emitted dust flux for North Africa, based on retrievals of dust storm frequency from the Meteosat Second Generation Spinning Enhanced Visible and InfraRed Imager (SEVIRI) and the relationship between dust storm frequency and emitted mass flux derived from the output of five models that simulate dust. Our results suggest that 64 (±16)% of all dust emitted from North Africa comes from the Bodélé depression, and that 13 (±3)% of the North African dust flux comes from a depression lying in the lee of the Aïr and Hoggar Mountains, making this area the second most important region of emission within North Africa.
Abstract:
The exhibition, The Map of the Empire (30 March – 6 May, 2016), featured photography, video, and installation works by Toronto-based artist Brad Isaacs (Mohawk | mixed heritage). The majority of the artworks within the exhibition were produced from the Canadian Museum of Nature's research and collections facility (Gatineau, Québec). The Canadian Museum of Nature (CMN) is the national natural history museum of (what is now called) Canada, with its galleries located in Ottawa, Ontario. The exhibition was the first to open at the Centre for Indigenous Research Creation at Queen's University under the supervision of Dr. Dylan Robinson. Through the installation of The Map of the Empire, Isaacs effectively claimed space on campus grounds, within the geopolitical space of Katarokwi | Kingston, and pushed back against settler colonial imaginings of natural history. The Map of the Empire explored the capacity of Brad's artistic practice to challenge the general belief under which natural history museums operate: that the experience of collecting/witnessing/interacting with a deceased and curated more-than-human animal will increase conservation awareness and facilitate human care towards nature. The exhibition also featured original poetry by Cecily Nicholson, author of Triage (2011) and From the Poplars (2014), as a response to Brad's artwork. I locate the work of The Map of the Empire within the broader context of curatorship as a political practice engaging with conceptual and actualized forms of slow violence, both inside of and beyond the museum space. By unmapping the structures of slow, showcased and archived violence within the natural history museum, we can begin to radically transform and reimagine our connections with more-than-humans and encourage these relations to be reciprocal rather than hyper-curated or preserved.
Abstract:
Background: Implementing effective antenatal care models is a key global policy goal. However, the mechanisms of action of these multi-faceted models that would allow widespread implementation are seldom examined and poorly understood. In existing care model analyses there is little distinction between what is done, how it is done, and who does it. A new evidence-informed quality maternal and newborn care (QMNC) framework identifies key characteristics of quality care. This offers the opportunity to identify systematically the characteristics of care delivery that may be generalizable across contexts, thereby enhancing implementation. Our objective was to map the characteristics of antenatal care models tested in Randomised Controlled Trials (RCTs) to a new evidence-based framework for quality maternal and newborn care; thus facilitating the identification of characteristics of effective care.
Methods: A systematic review of RCTs of midwifery-led antenatal care models. Mapping and evaluation of these models’ characteristics to the QMNC framework using data extraction and scoring forms derived from the five framework components. Paired team members independently extracted data and conducted quality assessment using the QMNC framework and standard RCT criteria.
Results: From 13,050 citations initially retrieved we identified 17 RCTs of midwifery-led antenatal care models from Australia (7), the UK (4), China (2), and Sweden, Ireland, Mexico and Canada (1 each). QMNC framework scores ranged from 9 to 25 (possible range 0–32), with most models reporting fewer than half the characteristics associated with quality maternity care. Description of care model characteristics was lacking in many studies, but was better reported for the intervention arms. Organisation of care was the best-described component. Underlying values and philosophy of care were poorly reported.
Conclusions: The QMNC framework facilitates assessment of the characteristics of antenatal care models. It is vital to understand all the characteristics of multi-faceted interventions such as care models; not only what is done but why it is done, by whom, and how this differed from the standard care package. By applying the QMNC framework we have established a foundation for future reports of intervention studies so that the characteristics of individual models can be evaluated, and the impact of any differences appraised.