984 results for valuable


Relevance:

10.00%

Publisher:

Abstract:

The Taita Hills in southeastern Kenya form the northernmost part of Africa’s Eastern Arc Mountains, which have been identified by Conservation International as one of the top ten biodiversity hotspots on Earth. As with many areas of the developing world, over recent decades the Taita Hills have experienced significant population growth leading to associated major changes in land use and land cover (LULC), as well as escalating land degradation, particularly soil erosion. Multi-temporal medium resolution multispectral optical satellite data, such as imagery from the SPOT HRV, HRVIR, and HRG sensors, provides a valuable source of information for environmental monitoring and modelling at a landscape level at local and regional scales. However, utilization of multi-temporal SPOT data in quantitative remote sensing studies requires the removal of atmospheric effects and the derivation of surface reflectance factor. Furthermore, for areas of rugged terrain, such as the Taita Hills, topographic correction is necessary to derive comparable reflectance throughout a SPOT scene. Reliable monitoring of LULC change over time and modelling of land degradation and human population distribution and abundance are of crucial importance to sustainable development, natural resource management, biodiversity conservation, and understanding and mitigating climate change and its impacts. The main purpose of this thesis was to develop and validate enhanced processing of SPOT satellite imagery for use in environmental monitoring and modelling at a landscape level, in regions of the developing world with limited ancillary data availability. 
The Taita Hills formed the application study site, whilst the Helsinki metropolitan region was used as a control site for validation and assessment of the applied atmospheric correction techniques, where multiangular reflectance field measurements were taken and where horizontal visibility meteorological data concurrent with image acquisition were available. The proposed historical empirical line method (HELM) for absolute atmospheric correction was found to be the only applied technique that could derive surface reflectance factor within an RMSE of < 0.02 (in reflectance factor) in the SPOT visible and near-infrared bands; an accuracy level identified as a benchmark for successful atmospheric correction. A multi-scale segmentation/object relationship modelling (MSS/ORM) approach was applied to map LULC in the Taita Hills from the multi-temporal SPOT imagery. This object-based procedure was shown to deliver significant improvements over a uni-scale maximum-likelihood technique. The derived LULC data were used in combination with low-cost GIS geospatial layers describing elevation, rainfall, and soil type to model degradation in the Taita Hills in the form of potential soil loss, utilizing the simple universal soil loss equation (USLE). Furthermore, human population distribution and abundance were modelled with satisfactory results using only SPOT- and GIS-derived data and non-Gaussian predictive modelling techniques. The SPOT-derived LULC data were found to be unnecessary as a predictor because the first- and second-order image texture measurements had greater power to explain variation in dwelling unit occurrence and abundance. The ability of the procedures to be implemented locally in the developing world using low-cost or freely available data and software was considered. The techniques discussed in this thesis are considered equally applicable to other medium- and high-resolution optical satellite imagery, as well as to the utilized SPOT data.
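The empirical-line family of corrections the abstract refers to can be illustrated with a minimal sketch. HELM itself derives the line from historical pseudo-invariant ground targets, as detailed in the thesis; here a plain least-squares line, reflectance = gain * DN + offset, is fitted to invented calibration pairs purely for illustration.

```python
# Minimal empirical-line atmospheric correction sketch (illustrative values).
# HELM's specifics (historical targets, visibility data) are in the thesis;
# this only shows the generic DN-to-reflectance linear mapping.

def fit_empirical_line(dns, reflectances):
    """Least-squares fit of a line mapping raw digital numbers to reflectance."""
    n = len(dns)
    mean_x = sum(dns) / n
    mean_y = sum(reflectances) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(dns, reflectances))
    var = sum((x - mean_x) ** 2 for x in dns)
    gain = cov / var
    return gain, mean_y - gain * mean_x

def correct(dn, gain, offset):
    """Convert a raw digital number to surface reflectance factor."""
    return gain * dn + offset

# (image DN, field-measured reflectance) pairs for calibration targets.
targets = [(20, 0.05), (60, 0.18), (120, 0.38), (200, 0.65)]
gain, offset = fit_empirical_line([t[0] for t in targets],
                                  [t[1] for t in targets])
```

In practice one line is fitted per spectral band, and the 0.02 reflectance-factor RMSE benchmark mentioned above is assessed against independent field measurements.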


Purpose This paper discusses the development of a strategy game for enterprise education. It is argued that requiring students to initially struggle with the game's rules and strategies results in a worthwhile test of their persistence and ability to manage ambiguity. Further, it is argued that in the absence of uncertainty, students will not benefit from the game's potential contribution to their overall learning. Approach The paper is constructed around the infusion of student narratives and the author's self-reflective thoughts. The paper explores the process of developing a game that: 1) provides the students with access to an enterprise reality; 2) strengthens their engagement with the theoretical foundations of their studies; and 3) provides a process for serious self-reflection. Findings Despite the mixed views presented in this paper, the game's development thus far has been very successful. Students do enjoy and benefit from enduring the frustration of a pure contest. Having to work through uncertainty is good practice for students in Higher Education, especially those engaged in enterprise education. Practical Implications Whilst the use of games in experiential education is not uncommon, consideration of how and why they are developed is not always well understood. This paper suggests that enterprise educators have significant opportunities to develop games that genuinely provide students access to the entrepreneur's way of life. Value of Paper This paper provides evidence of how a game can be constructed to add significant value to an existing curriculum. It also provides evidence of the inner thoughts of students frustrated by a challenge they refuse to give up on. As such, it provides a valuable window through which to contemplate the minds of tomorrow's nascent entrepreneurs.


Does the Internet's World Wide Web (the web) represent an opportunity or a threat to the small firms of the world? In line with recent calls for entrepreneurship research to adopt a more evolutionary approach, this article considers the context, process, and possible outcomes of small place-based firms operating in a web-impacted environment. Despite initial optimism that the web would provide a level playing field for firms of all sizes, little evidence exists to support such a notion. When the learning abilities deemed necessary to exploit the web are considered, it would seem that only the most entrepreneurial of small firms are likely to adapt to web-impacted environments. It is concluded that the present rate of web-based change represents a unique and valuable research opportunity for entrepreneurship researchers.


The idea of ‘wicked’ problems has made a valuable contribution to recognising the complexity and challenges of contemporary planning. However, some wicked policy problems are further complicated by a significant moral, psychological, religious, or cultural dimension. This is particularly the case for problems that possess strong elements of abjection and symbolic pollution and high degrees of psychosocial sensitivity. Because this affects the way these problems are framed and discussed, they are also characterised by high levels of verbal proscription. As a result, they are not discussed in the rational and emotion-free way that conventional planning demands and can become obscured or inadequately acknowledged in planning processes. This further contributes to their wickedness and intractability. Through paradigmatic urban planning examples, we argue that placing their unspeakable nature at the forefront of enquiry will enable planners to advocate for a more contextually and culturally situated approach to planning, which accommodates both emotional and embodied talk alongside more technical policy contributions. Re-imagining wicked problems in this way has the potential to enhance policy and plan-making, and to disrupt norms, expose their contingency, and open new ways of planning for both the unspeakable and the merely wicked.


The antibacterial activity and total phenolic (TP) content of Agaricus bisporus stipes were assessed using solvent and water extracts to determine their bioactivity. Extraction methods included accelerated solvent extraction (ASE) and hot water extraction followed by membrane concentration. The water extract from ASE had the highest TP content, at 1.08 mg gallic acid equivalents (GAE)/g dry weight (DW), followed by ethanol at 0.61 mg GAE/g DW and acetone at 0.11 mg GAE/g DW. Acetone extracts inhibited Escherichia coli and Staphylococcus aureus at less than 50%; ethanol inhibited E. coli at 61.9% and S. aureus at 56.6%; and ASE water inhibited E. coli at 78.6% and S. aureus at 65.4%. The TP content of the membrane-concentrated extract of mushroom was 17 mg GAE in 100 mL. Membrane-concentrated water extracts had a higher percentage inhibition of S. aureus than of E. coli. Overall, the results were promising for the further application of mushroom stipe extracts as a functional food additive. Practical Applications Mushrooms are known for their health benefits and have been identified as a good source of nutrients. The highly perishable nature of mushrooms warrants further processing and preservation to minimize losses along the supply chain. This study explores the possibility of adding value to mushroom stipes, a by-product of the fresh mushroom industry. The extracts assessed indicate the antibacterial activity and phenolic content, and the potential of using these extracts as functional ingredients in the food industry. This study provides valuable information to the scientific community and to industries developing novel ingredients to meet the market demand for natural food additives.


The study examines various uses of computer technology in the acquisition of information by visually impaired people. For this study, 29 visually impaired persons took part in a survey about their experiences concerning the acquisition of information and the use of computers, especially with a screen magnification program, a speech synthesizer, and a braille display. According to the responses, the evolution of computer technology offers an important possibility for visually impaired people to cope with everyday activities and interact with the environment. Nevertheless, the functionality of assistive technology needs further development to become more usable and versatile. Since the challenges of independent observation of the environment were emphasized in the survey, the study led to the development of a portable text vision system called Tekstinäkö. Contrary to typical stand-alone applications, the Tekstinäkö system was constructed by combining devices and programs that are readily available on the consumer market. As the system operates, pictures are taken by a digital camera and instantly transmitted to a text recognition program on a laptop computer, which reads the text aloud using a speech synthesizer. Visually impaired test users described that even unsure interpretations of the texts in the environment given by the Tekstinäkö system are at least a welcome addition to their overall perception of the environment. It became clear that even with modest development work it is possible to bring new, useful, and valuable methods to the everyday life of disabled people. The unconventional production process of the system appeared to be efficient as well. The achieved results and the proposed working model offer one suggestion for giving enough attention to the easily overlooked needs of people with special abilities.
ACM Computing Classification System (1998): K.4.2 Social Issues: Assistive technologies for persons with disabilities; I.4.9 Image processing and computer vision: Applications. Keywords: Visually impaired, computer-assisted, information, acquisition, assistive technology, computer, screen magnification program, speech synthesizer, braille display, survey, testing, text recognition, camera, text, perception, picture, environment, transportation, guidance, independence, vision, disabled, blind, speech, synthesizer, braille, software engineering, programming, program, system, freeware, shareware, open source, Tekstinäkö, text vision, TopOCR, Autohotkey, computer engineering, computer science
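The Tekstinäkö design chains three off-the-shelf stages: image capture, text recognition, and speech output. A loosely coupled sketch of that pipeline is shown below; the stage functions are stand-ins for the consumer components the study actually combined (a digital camera, TopOCR, and a speech synthesizer driven via AutoHotkey), so the names and signatures here are assumptions for illustration only.

```python
# Sketch of the capture -> OCR -> text-to-speech pipeline. Each stage is
# passed in as a callable, mirroring the study's approach of composing
# readily available consumer components rather than writing a monolith.

def run_pipeline(capture, recognize, speak):
    """Capture a picture, recognize its text, and speak any result found."""
    image = capture()          # e.g. a frame from a digital camera
    text = recognize(image)    # e.g. an OCR program such as TopOCR
    if text.strip():           # only speak non-empty recognitions
        speak(text)            # e.g. a speech synthesizer
    return text
```

Because the stages only agree on plain data (an image in, a string out), any one component can be swapped for another consumer product, which is the design choice the abstract highlights.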


In competitive combat sporting environments like boxing, statistics on a boxer's performance, including the number and type of punches thrown, provide a valuable source of data and feedback which is routinely used for coaching and performance improvement purposes. This paper presents a robust framework for the automatic classification of a boxer's punches. Overhead depth imagery is employed to alleviate challenges associated with occlusions, and robust body-part tracking is developed for the noisy time-of-flight sensors. Punch recognition is addressed through both multi-class SVM and Random Forest classifiers. A coarse-to-fine hierarchical SVM classifier is presented based on prior knowledge of boxing punches. This framework has been applied to shadow-boxing image sequences taken at the Australian Institute of Sport with 8 elite boxers. Results demonstrate the effectiveness of the proposed approach, with the hierarchical SVM classifier yielding 96% accuracy, signifying its suitability for analysing athletes' punches in boxing bouts.
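The coarse-to-fine idea can be sketched generically: a first-stage classifier picks the punch family, and a second-stage classifier, specialised to that family, resolves the exact label. The paper uses SVMs on depth-derived body-part features; in this sketch, nearest-centroid classifiers and the invented 3-D features (forward extension, vertical lift, hand side) stand in purely for illustration.

```python
import math

def nearest(x, centroids):
    """Return the label whose centroid is closest to feature vector x."""
    return min(centroids, key=lambda lbl: math.dist(x, centroids[lbl]))

# Stage 1 (coarse): punch family, ignoring which hand threw it.
COARSE = {
    "straight": (1.0, 0.1, 0.0),
    "hook":     (0.4, 0.3, 0.0),
    "uppercut": (0.3, 0.9, 0.0),
}
# Stage 2 (fine): within the chosen family, resolve the hand.
FINE = {
    "straight": {"left straight": (1.0, 0.1, -1.0), "right straight": (1.0, 0.1, 1.0)},
    "hook":     {"left hook":     (0.4, 0.3, -1.0), "right hook":     (0.4, 0.3, 1.0)},
    "uppercut": {"left uppercut": (0.3, 0.9, -1.0), "right uppercut": (0.3, 0.9, 1.0)},
}

def classify(x):
    """Coarse-to-fine classification: family first, then the exact punch."""
    family = nearest(x, COARSE)
    return nearest(x, FINE[family])
```

The hierarchy lets each stage solve a smaller, easier problem than one flat multi-class decision, which is the motivation the paper gives for the coarse-to-fine SVM design.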


Accounting information systems (AIS) capture and process accounting data and provide valuable information for decision-makers. However, in a rapidly changing environment, continual management of the AIS is necessary for organizations to optimise performance outcomes. We suggest that building a dynamic AIS capability improves accounting process and organizational performance. Using the dynamic capabilities framework (Teece 2007), we propose that a dynamic AIS capability can be developed through the synergy of three competencies: a flexible AIS, a complementary business intelligence system, and accounting professionals with IT technical competency. Using survey data, we find evidence of a positive association between a dynamic AIS capability, accounting process performance, and overall firm performance. The results suggest that developing a dynamic AIS resource can add value to an organization. This study provides guidance for organizations looking to leverage the performance outcomes of their AIS environment.


Background Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the applicability of high-resolution infrared thermal imaging for noninvasive automated detection of signs of diabetic foot disease. Methods The plantar foot surfaces of 15 diabetes patients were imaged with an infrared camera (resolution, 1.2 mm/pixel): 5 patients had no visible signs of foot complications, 5 patients had local complications (e.g., abundant callus or neuropathic ulcer), and 5 patients had diffuse complications (e.g., Charcot foot, infected ulcer, or critical ischemia). Foot temperature was calculated as mean temperature across pixels for the whole foot and for specified regions of interest (ROIs). Results No differences in mean temperature >1.5 °C between the ipsilateral and the contralateral foot were found in patients without complications. In patients with local complications, mean temperatures of the ipsilateral and the contralateral foot were similar, but the temperature at the ROI was >2 °C higher compared with the corresponding region in the contralateral foot and with the mean of the whole ipsilateral foot. In patients with diffuse complications, mean temperature differences of >3 °C between the ipsilateral and contralateral foot were found. Conclusions With an algorithm based on parameters that can be captured and analyzed with a high-resolution infrared camera and a computer, it is possible to detect signs of diabetic foot disease and to discriminate between no, local, or diffuse diabetic foot complications. As such, an intelligent telemedicine monitoring system for noninvasive automated detection of signs of diabetic foot disease is one step closer. Future studies are essential to confirm and extend these promising early findings.
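The temperature contrasts reported above (>1.5 °C whole-foot symmetry, >2 °C local ROI contrast, >3 °C whole-foot asymmetry) suggest a simple rule-based discriminator. The sketch below is an illustration built only from those reported contrasts, not the authors' actual algorithm; inputs are per-pixel temperatures in °C.

```python
# Rule-of-thumb classifier following the temperature contrasts reported in
# the abstract. Thresholds and input values are illustrative only.

def mean(xs):
    return sum(xs) / len(xs)

def classify_foot(ipsi_pixels, contra_pixels, roi_pixels, contra_roi_pixels):
    """Classify thermal signs as 'none', 'local', or 'diffuse'."""
    whole_diff = mean(ipsi_pixels) - mean(contra_pixels)   # whole-foot asymmetry
    roi_diff = mean(roi_pixels) - mean(contra_roi_pixels)  # local ROI contrast
    if whole_diff > 3.0:
        return "diffuse"
    if roi_diff > 2.0:
        return "local"
    if abs(whole_diff) <= 1.5:
        return "none"
    return "indeterminate"
```

Ordering matters: a diffuse complication also raises ROI contrast, so the whole-foot rule is checked first.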


Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for the detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters. The performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built, containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer. Each skin spot was annotated manually by clinicians as "healthy" or as a specific (pre-)sign of ulceration. Statistical analysis of the dataset showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting the classification performance. This impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10⁶, or by post-image processing.
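Casting filter choice as feature selection can be illustrated with a greedy forward search: add, one at a time, the band that most improves a class-separability score. The score, the synthetic spectra, and the two-class setup below are illustrative stand-ins, not the authors' statistical analysis.

```python
import math

def fisher_score(spectra, labels, bands):
    """Between-class centroid distance over within-class spread on `bands`."""
    groups = {}
    for s, lbl in zip(spectra, labels):
        groups.setdefault(lbl, []).append([s[b] for b in bands])
    a, b = groups.values()  # assumes exactly two classes
    ca = [sum(col) / len(a) for col in zip(*a)]
    cb = [sum(col) / len(b) for col in zip(*b)]
    between = math.dist(ca, cb)
    within = 1e-9 + sum(
        sum(math.dist(x, c) for x in grp) / len(grp)
        for grp, c in ((a, ca), (b, cb))
    )
    return between / within

def select_bands(spectra, labels, k):
    """Pick k bands greedily, each maximizing the score given those chosen."""
    chosen, remaining = [], list(range(len(spectra[0])))
    for _ in range(k):
        best = max(remaining,
                   key=lambda b: fisher_score(spectra, labels, chosen + [b]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Synthetic reflectance spectra over 4 bands; band 2 separates the classes.
spectra = [
    (0.50, 0.50, 0.10, 0.50), (0.52, 0.48, 0.12, 0.50),  # "healthy"
    (0.50, 0.50, 0.90, 0.50), (0.48, 0.52, 0.88, 0.50),  # "pre-sign"
]
labels = ["healthy", "healthy", "pre-sign", "pre-sign"]
```

Greedy forward selection keeps the filter count small by construction, which matches the overfitting concern the abstract raises about using too many bands.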


Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye's anatomical and physiological characteristics and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a dataset of wide-field chorio-retinal OCT scans (field of view (FOV) of 60° × 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can affect axial tissue thickness calculations, and that alternative metrics, which are not biased by these effects, should be considered. This study demonstrates the need to consider alternative methods of calculating tissue thickness in order to avoid measurement error due to image tilt and curvature.
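The tilt bias has a simple geometric core: for a layer of true (perpendicular) thickness t whose boundaries are tilted by angle theta in the image, the axial (vertical A-scan) measurement is t / cos(theta), so axial metrics overestimate tilted tissue. A minimal sketch of that relationship, with illustrative numbers in micrometres:

```python
import math

# Geometric illustration of tilt bias in axial thickness measurement.
# For a tilted layer, axial (vertical) thickness = true thickness / cos(tilt).

def axial_thickness(true_thickness, tilt_deg):
    """Thickness measured along the vertical A-scan direction."""
    return true_thickness / math.cos(math.radians(tilt_deg))

def tilt_corrected(axial, tilt_deg):
    """Recover perpendicular thickness from an axial measurement."""
    return axial * math.cos(math.radians(tilt_deg))
```

A 300 µm layer tilted by 20° reads about 6% too thick axially; curvature of the posterior segment produces a spatially varying version of the same effect, which is why the paper examines metrics measured normal to the tissue boundary.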


The material I analyze for my master's thesis is a teaching manual used by the Mormons (the Church of Jesus Christ of Latter-day Saints), called "Duties and Blessings of the Priesthood". This work includes numerous lesson plans, each one with a separate topic. The manual is intended especially for teachers, but can also be used for individual study. The main target of my research is to find out how men and their bodies are constructed in the manual. Prescriptive texts together with narrative stories and illustrations create a multifaceted picture of Mormon notions of masculinity and corporeality. I approach my research material from a constructivist perspective. I build my interpretative reading upon Critical Discourse Analysis. I am especially interested in how the manual interprets and understands connections between gender, embodiment, and religion. I understand gender in Judith Butler's terms, as a performance of styled and repeated gestures. Some of the discussions I raise in my work draw upon the disciplines of Critical Men's Studies and the Sociology of Religion. In Mormonism, gender is thought to be an elementary part of human ontology. It is an eternal trait inherited from God the Father (and God the Mother). The place of men in Mormon cosmology is determined by their double role as patriarchs, fathers, and priests. The main objective of mortal life is to gain salvation together with one's family. The personal goal of a Mormon man is to one day become a god. Patriarchs are responsible for the spiritual and material well-being of their family. The head of a household should be gentle and loving, but still an unconditional authority. In the manual, a Mormon man is depicted as a successor of the mythical and exemplary men of sacred history. The perfect and sinless body of Jesus Christ serves as an ideal for the male body. Mormon masculinity is also defined by the priesthood, the holy power of God, which is given to practically all male Mormons.
Through the priesthood, a Mormon man serves as the governor of God on Earth. The Mormon priest has the authority to bind the immanent and the transcendent worlds together with gestures, poses, and motions performed with his body. In Mormonism, the body also symbolizes a temple, a space where the sacred meets the profane. Because the priesthood borne by a man is holy, he has to treat his body accordingly. The body is valuable in itself; without it, one cannot be saved. Men are forbidden from polluting their bodies by using stimulants or by having sexual relations out of wedlock. A priesthood holder must uphold healthy habits, dress neatly, and conduct himself in a temperate manner. He must also be outgoing and attentive. The manual suggests that a man's goodness or wickedness can be perceived from his external appearance. The Church of Jesus Christ of Latter-day Saints is a hierarchical and man-led organisation. The ideals of gender and corporeality are set by a homogeneous priesthood leadership that consists mainly of white heterosexual American men. The larger Mormon community can control individual men by sanctioning. Growing up as a Mormon man happens under the guidance of one's reference group.


In What We Owe to Each Other, T.M. Scanlon formulated a new version of the ethical theory called contractualism. This theory took reasons, considerations that count in favour of judgment-sensitive attitudes, to be the fundamental normative notion. It then used normative reasons first to account for evaluative properties. For an object to be valuable, on this view, is for it to have properties that provide reasons to have favourable attitudes towards the bearer of value. Scanlon also used reasons to account for moral wrongness. His contractualism claims that an act is morally wrong if it is forbidden by any set of moral principles that no one could reasonably reject. My thesis consists of five previously published articles which attempt to clarify Scanlon's theory and to defend it against its critics. The first article defends the idea that normative reason-relations are fundamental against Joshua Gert. Gert argues that rationality is a more basic notion than reasons and that reasons can be analysed in terms of their rationally requiring and justifying dimensions. The second article explores the relationship between value and reasons. It defends Scanlon's view, according to which reasons are more basic than value, against those who think that reasons are based on the evaluative realm. The last three articles defend Scanlon's views about moral wrongness. The first of them discusses a classic objection to contractualist theories. This objection is that principles which no one could reasonably reject are redundant in accounting for wrongness: we need a prior notion of wrongness to select those principles, and such principles are not required to make actions wrong or to provide reasons against wrong actions. The fourth article explores the distinctive reasons which contractualists claim there are for avoiding wrong actions.
The last article argues against the critics of contractualism who claim that contractualism has implausible normative consequences for situations related to the treatment of different-sized groups of people.


Distinguishing critical participatory media from other participatory media forms (for example, user-generated content and social media) may be increasingly difficult to do, but it nonetheless remains an important task if media studies is to remain relevant to the continuing development of inclusive social, political, and media cultures. This was one of the premises of a national Australian Research Council-funded study that set out to improve the visibility of critical participatory media and to understand its use for facilitating media participation on a population-wide basis (Spurgeon et al. 2015). The term ‘co-creative’ media was adopted to make this distinction and to describe an informal system of critical participatory media practice that is situated between the major public, Indigenous, and community arts, culture, and media sectors. Although the co-creative media system is found to be a site of innovation and an engine for social change, its value is still not fully understood. For this reason, this system continues to provide media and cultural studies scholars with valuable sites for researching the sociocultural transformations afforded by new media and communication technologies, as well as their limitations.


Volatility is central to options pricing and risk management. It reflects the uncertainty of investors and the inherent instability of the economy. Time series methods are among the most widely applied scientific methods for analyzing and predicting volatility. Very frequently sampled data contain much valuable information about the different elements of volatility and may ultimately reveal the reasons for time-varying volatility. The use of such ultra-high-frequency data is common to all three essays of the dissertation. The dissertation belongs to the field of financial econometrics. The first essay uses wavelet methods to study the time-varying behavior of scaling laws and long memory in the five-minute volatility series of Nokia on the Helsinki Stock Exchange around the burst of the IT bubble. The essay is motivated by earlier findings which suggest that different scaling laws may apply to intraday time-scales and to larger time-scales, implying that the so-called annualized volatility depends on the data sampling frequency. The empirical results confirm the appearance of time-varying long memory and different scaling laws that, for a significant part, can be attributed to investor irrationality and to an intraday volatility periodicity called the New York effect. The findings have potentially important consequences for options pricing and risk management, which commonly assume constant memory and scaling. The second essay investigates modelling the duration between trades in stock markets. Durations convey information about investor intentions and provide an alternative view of volatility. Generalizations of standard autoregressive conditional duration (ACD) models are developed to meet needs observed in previous applications of the standard models.
According to the empirical results, based on data on actively traded stocks on the New York Stock Exchange and the Helsinki Stock Exchange, the proposed generalization clearly outperforms the standard models and also performs well in comparison with another recently proposed alternative to the standard models. The distribution used to derive the generalization may also prove valuable in other areas of risk management. The third essay studies empirically the effect of decimalization on volatility and market microstructure noise. Decimalization refers to the change from fractional pricing to decimal pricing, and it was carried out on the New York Stock Exchange in January 2001. The methods used here are more accurate than those in the earlier studies and put more weight on market microstructure. The main result is that decimalization decreased observed volatility by reducing noise variance, especially for the highly active stocks. The results help risk management and market mechanism design.
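The baseline against which the dissertation's second essay generalizes is the standard ACD(1,1) model of Engle and Russell, in which the conditional expected duration follows psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1} and the observed duration is x_i = psi_i * eps_i with unit-mean innovations. A minimal simulation sketch, with invented parameter values and exponential innovations chosen for simplicity:

```python
import random

# ACD(1,1) simulation sketch: durations cluster because a long duration
# raises the conditional expected duration of the next trade.

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, seed=42):
    """Simulate n trade durations from a standard ACD(1,1) recursion."""
    rng = random.Random(seed)
    psi = omega / (1.0 - alpha - beta)  # start at the unconditional mean
    durations, psis = [], []
    for _ in range(n):
        x = psi * rng.expovariate(1.0)  # exponential innovations, mean 1
        durations.append(x)
        psis.append(psi)
        psi = omega + alpha * x + beta * psi  # update expected duration
    return durations, psis
```

The generalizations the essay develops replace pieces of this recursion and the innovation distribution; the dissertation itself gives the specifics.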