130 results for Foreground object segmentation
Abstract:
Graduates are deemed to be a key source of talent within many organisations and thus recruiting, developing and retaining them is viewed as a logical talent management (TM) strategy. However, little attention has been paid to university graduates as part of an organisation’s TM strategy. Such a focus addresses the need for further research into the segmentation of talent pools and the distinct challenges different talent pools are likely to create. This research, which utilised a qualitative data collection strategy, examined the experiences and practices of six large UK organisations in relation to graduate TM. Drawing from Gallardo-Gallardo, Dries and González-Cruz’s (2013. What is the meaning of ‘talent’ in the world of work? Human Resource Management Review, 23, 290–300.) framework for the conceptualisation of talent, the findings from this research indicate and explain why graduate employers are frequently compelled to use the object approach (talent as characteristics of people) due to the unique characteristics that recent graduates possess, even though other studies have found that a subject approach (talent as people and what they do) is preferred by most employers. Ultimately, employers conceptualise graduate talent by what they describe as ‘the edge’ which needs to be ‘sharpened’ to fully realise the potential that graduates offer.
Abstract:
There is a perception amongst some of those learning computer programming that the principles of object-oriented programming (where behaviour is often encapsulated across multiple class files) can be difficult to grasp, especially when taught through a traditional, didactic ‘talk-and-chalk’ method or in a lecture-based environment.
We propose a non-traditional teaching method, developed for a government-funded teacher-training project delivered by Queen’s University, which we call bigCode. In this scenario, learners are provided with many printed, poster-sized fragments of code (in this case either Java or C#). The learners sit on the floor in groups and assemble these fragments into the many classes which make up an object-oriented program.
Early trials indicate that bigCode is an effective method for teaching object-orientation. The requirement to physically organise the code fragments closely imitates the thought processes of a good software developer when developing object-oriented code.
Furthermore, beyond teaching the principles involved in object-orientation, bigCode is also an extremely useful technique for teaching learners the organisation and structure of individual classes in Java or C# (as well as the organisation of procedural code). The mechanics of organising fragments of code into complete, correct computer programs give users first-hand practice of this important skill, and as a result they subsequently find it much easier to develop well-structured code on a computer.
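As an illustration (ours, not taken from the project materials), the printed fragments might correspond to small Java classes such as the following, where behaviour is encapsulated across two class files that learners must assemble and connect:

    // Fragment 1 (Account.java): a class that hides its state behind methods
    public class Account {
        private double balance;

        public void deposit(double amount) {
            balance += amount;
        }

        public double getBalance() {
            return balance;
        }
    }

    // Fragment 2 (Bank.java): a second class whose behaviour depends on the first
    public class Bank {
        public static void main(String[] args) {
            Account account = new Account();
            account.deposit(100.0);
            System.out.println(account.getBalance());
        }
    }

Laid out as separate posters, the call from Bank.main to Account.deposit is exactly the kind of cross-file connection that learners must physically discover and justify.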
Yet, open questions remain. Is bigCode successful only because we have unknowingly predominantly targeted kinesthetic learners? Is bigCode also an effective teaching approach for other types of learners, such as visual learners? How scalable is bigCode: can it, in its current form, be used with large class sizes, or outside the classroom?
Abstract:
The YSOVAR (Young Stellar Object VARiability) Spitzer Space Telescope observing program obtained the first extensive mid-infrared (3.6 and 4.5 μm) time series photometry of the Orion Nebula Cluster plus smaller footprints in 11 other star-forming cores (AFGL 490, NGC 1333, Mon R2, GGD 12-15, NGC 2264, L1688, Serpens Main, Serpens South, IRAS 20050+2720, IC 1396A, and Ceph C). There are ~29,000 unique objects with light curves in either or both IRAC channels in the YSOVAR data set. We present the data collection and reduction for the Spitzer and ancillary data, and define the "standard sample" on which we calculate statistics, consisting of fast cadence data, with epochs roughly twice per day for ~40 days. We also define a "standard sample of members" consisting of all the IR-selected and X-ray-selected members. We characterize the standard sample in terms of other properties, such as spectral energy distribution shape. We use three mechanisms to identify variables in the fast cadence data: the Stetson index, a χ² fit to a flat light curve, and significant periodicity. We also identified variables on the longest timescales possible (six to seven years) by comparing measurements taken early in the Spitzer mission with the mean from our YSOVAR campaign. The fraction of members in each cluster that are variable on these longest timescales is a function of the ratio of Class I/total members in each cluster, such that clusters with a higher fraction of Class I objects also have a higher fraction of long-term variables. For objects with a YSOVAR-determined period and a [3.6]-[8] color, we find that stars with longer periods are more likely than those with shorter periods to have an IR excess. We do not find any evidence for variability that causes [3.6]-[4.5] excesses to appear or vanish within our data set; out of members and field objects combined, at most 0.02% may have transient IR excesses.
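To make the second of these tests concrete, the following is a minimal sketch (ours, not the YSOVAR pipeline; the inverse-variance weighted mean and the cutoff of 5 are illustrative assumptions) of a χ² fit to a flat light curve in Java:

    // Illustrative sketch: flag a light curve as variable if a constant (flat)
    // model is a poor fit. The best-fit constant under Gaussian errors is the
    // inverse-variance weighted mean; chi-squared then measures the scatter of
    // the measurements about it, in units of the photometric errors.
    public class FlatFitTest {

        // mags: magnitudes at each epoch; errs: 1-sigma photometric errors
        static double reducedChiSquaredFlat(double[] mags, double[] errs) {
            double num = 0.0, den = 0.0;
            for (int i = 0; i < mags.length; i++) {
                double w = 1.0 / (errs[i] * errs[i]);
                num += w * mags[i];
                den += w;
            }
            double mean = num / den;          // best-fit flat light curve
            double chi2 = 0.0;
            for (int i = 0; i < mags.length; i++) {
                double r = (mags[i] - mean) / errs[i];
                chi2 += r * r;
            }
            return chi2 / (mags.length - 1);  // reduce by N - 1 degrees of freedom
        }

        public static void main(String[] args) {
            double[] mags = {12.01, 12.05, 11.98, 12.40, 12.02};
            double[] errs = {0.02, 0.02, 0.02, 0.02, 0.02};
            // The threshold of 5 is an arbitrary illustrative choice.
            String verdict = reducedChiSquaredFlat(mags, errs) > 5.0 ? "variable" : "flat";
            System.out.println(verdict);
        }
    }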
Abstract:
We consider the problem of segmenting text documents that have a two-part structure, such as a problem part and a solution part. Documents of this genre include incident reports, which typically involve a description of events relating to a problem followed by those pertaining to the solution that was tried. Segmenting such documents into their two component parts would render them usable in knowledge-reuse frameworks such as Case-Based Reasoning. This segmentation problem presents a hard case for traditional text segmentation due to the lexical inter-relatedness of the segments. We develop a two-part segmentation technique that can harness a corpus of similar documents to model the behavior of the two segments and their inter-relatedness using language models and translation models respectively. In particular, we use separate language models for the problem and solution segment types, whereas the inter-relatedness between segment types is modeled using an IBM Model 1 translation model. We model documents as being generated starting from the problem part, which comprises words sampled from the problem language model, followed by the solution part, whose words are sampled either from the solution language model or from a translation model conditioned on the words already chosen in the problem part. We show, through an extensive set of experiments on real-world data, that our approach outperforms state-of-the-art text segmentation algorithms in the accuracy of segmentation, and that such improved accuracy translates well to improved usability in Case-Based Reasoning systems. We also analyze the robustness of our technique to varying amounts and types of noise and empirically illustrate that it is quite noise tolerant and degrades gracefully with increasing amounts of noise.
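To make the generative story above concrete, the document likelihood can be written as follows (a reconstruction from the abstract; the mixture weight λ and the uniform 1/|P| alignment factor are our assumptions in the spirit of IBM Model 1, not values stated by the authors). For a document with problem part P and solution part S:

    P(P, S) = \prod_{w \in P} P_{\text{prob}}(w) \times \prod_{w \in S} \left[ \lambda \, P_{\text{sol}}(w) + (1 - \lambda) \, \frac{1}{|P|} \sum_{v \in P} P_{\text{tr}}(w \mid v) \right]

Here P_prob and P_sol are the problem and solution language models, and P_tr(w | v) is the IBM Model 1 translation table that lets solution words be explained by the problem words already generated.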