Thursday, 11 April 2013

Reflection 10 : Forensic Linguistics


Assalamualaikum w.b.t
Today we learned about Forensic Linguistics, which is very interesting and fascinating. People might think of the famous CSI television series when we say the word “forensic”, but Forensic Linguistics is a branch of Applied Linguistics. It takes linguistic knowledge and methods and applies them to the forensic context of law, investigation, trial, and punishment. There are three main areas of application for linguists working in forensic contexts: understanding the language of the written law, understanding language use in forensic and legal processes, and the provision of linguistic evidence.

Forensic linguists are involved in areas related to crime, crime solving, and assisting wrongly accused people. Some of these areas include voice identification, author identification, and discourse analysis. A few examples of corpora available in Forensic Linguistics are ransom notes, threatening letters, suicide notes, and examination fraud. The applications of Forensic Linguistics in language research include author or speaker identification, intertextuality, text typing, and linguistic profiling.
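To get a feel for author identification, here is a minimal sketch of one common idea: comparing how often a questioned text uses everyday function words against samples from two candidate authors. This is my own toy illustration, not something we did in class; the texts, the word list and the distance measure are all simplified assumptions.

```python
# Toy author-identification sketch (my own illustration, not from the lecture):
# compare the questioned text's use of common function words against
# writing samples from two hypothetical candidate authors.
from collections import Counter
import re

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "was"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Sum of absolute differences between two frequency profiles."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical samples; real forensic work would use much longer texts.
author_a = "It was the best of times and it was the worst of times."
author_b = "To be or not to be, that is the question."
questioned = "It is the best of all that was in it."

q = profile(questioned)
closer_to_a = distance(q, profile(author_a)) < distance(q, profile(author_b))
print("Questioned text is closer to author A" if closer_to_a else "Questioned text is closer to author B")
```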

Reflection 9 : Computational Stylistics


Assalamualaikum w.b.t
          This time we learned about Computational Stylistics. In pure Computational Stylistics, computers are used to study the stylistic characteristics of particular texts, authors, genres and periods. For example, Raben & Lieberman (1976) used automatically produced indexes to study vocabulary similarity in Milton’s Paradise Lost and Shelley’s Prometheus Unbound, while Burton used a concordancer to compare Antony and Cleopatra and Richard II.
           Computational Stylistics is a sub-discipline of computational linguistics. It evolved in the 1960s in the area of “stylometry,” where the computer is used to generate data on the types, number and length of words and sentences. However, this application carries risks: it forecloses the possibility of an author changing his or her style from text to text, and there is always the possibility of two authors writing alike.
          A few of the fields in which Computational Stylistics is applied are machine translation, the social sciences and humanities, and literary fields such as plays, poems, novels and short stories. The scope of Computational Stylistics includes counting the frequency of common and rare words, detecting writing style, producing a distinct and unmistakable “literary fingerprint” that can be used to determine if and when there have been collaborations with other texts, detecting idiosyncratic uses of language which distinguish one author from another, determining the sentiment of a text, analysing variation in rhetorical style among scientific articles, and a few others.
          A corpus in Computational Stylistics can be anything related to the literary works chosen, for example Shakespeare’s Romeo and Juliet or Emily Dickinson’s poems.
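To make the idea of a “literary fingerprint” more concrete, here is a small sketch of the kind of counts involved, taking the two works just mentioned as hypothetical plain-text files. It is my own illustration, not part of the class material, and the file names are assumptions.

```python
# Minimal stylometry sketch (my own illustration): a few simple
# "fingerprint" features that can be computed for any text.
import re
from collections import Counter

def fingerprint(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    counts = Counter(words)
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "type_token_ratio": len(counts) / max(len(words), 1),
        "top_words": counts.most_common(5),   # the most frequent common words
    }

# Hypothetical usage: compare two works by printing their fingerprints.
romeo = open("romeo_and_juliet.txt").read()        # assumed local plain-text file
dickinson = open("dickinson_poems.txt").read()     # assumed local plain-text file
print(fingerprint(romeo))
print(fingerprint(dickinson))
```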
          We can also apply Computational Stylistics to analyse the style of Arabic poems, which contribute greatly to da'wah and are one of the ways of spreading Islamic teachings.

Tuesday, 9 April 2013

Reflection 8 : Lexicography

Assalamualaikum w.b.t
Today we came across a somewhat familiar but new term in class: lexicography. Lexicography is basically the practice of dictionary production, where the editing, compiling and writing of a dictionary take place. We were also introduced to Professor Kev Nair, who is regarded as the father of fluency lexicography. Fluency lexicography came into existence as a separate branch of dictionary writing. Interestingly, lexicography is not restricted to the English language only; there are also Arabic lexicography, German lexicography and many others.
A linguist whose specific expertise is in writing dictionaries is called a lexicographer. A lexicographer is concerned with what words are, what they mean, how the vocabulary of a language is structured, how speakers of the language use and understand the words, how the words evolved, and what relationships exist between words.
There are two related disciplines in lexicography: practical lexicography and theoretical lexicography. Practical lexicography is the art or craft of compiling, writing and editing dictionaries. Practical lexicographic work involves several activities, because the compilation of well-crafted dictionaries requires careful consideration of several difficult steps: profiling the intended users; selecting and organizing the components of the dictionary; selecting words and affixes for systematization as entries; selecting collocations, phrases and examples; defining the words; organizing definitions; specifying pronunciations of words; labeling definitions and pronunciations for register and dialect where appropriate; and designing the best way in which users can access the data in printed and electronic dictionaries.
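Just to picture how those components might be organized before a dictionary is formatted, here is one possible sketch of a single entry as a structured record. The field names are my own assumptions, not any standard dictionary format.

```python
# One possible way (my own sketch, not a standard format) to organise the
# components of a single dictionary entry before formatting it for print
# or electronic use.
entry = {
    "headword": "corpus",
    "part_of_speech": "noun",
    "pronunciation": "/ˈkɔːpəs/",
    "senses": [
        {
            "definition": "a large collection of texts in machine-readable form",
            "register": "linguistics",          # register/dialect label
            "examples": ["The corpus contains one million words."],
            "collocations": ["corpus linguistics", "reference corpus"],
        },
    ],
    "etymology": "Latin corpus, meaning 'body'",
}

# Dictionary software can then sort, search and format many such entries consistently.
print(entry["headword"], "-", entry["senses"][0]["definition"])
```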
The other scope is theoretical lexicography. It is basically the scholarly discipline of analyzing and describing the semantic, syntagmatic and paradigmatic relationships within the lexicon or vocabulary of a language. Theoretical lexicography is also related to the idea of developing theories of dictionary components and structures linking the data in dictionaries. It is sometimes referred to as metalexicography. It concerns the same aspects as lexicography but is meant to lead to the development of principles that can improve the quality of future dictionaries. There are several branches of academic dictionary research: dictionary criticism, which evaluates the quality of one or more dictionaries; dictionary history, which traces the traditions of a type of dictionary in a particular country or language; dictionary typology, which classifies the various genres of reference works, such as monolingual versus bilingual dictionaries; dictionary structure, which concerns the various ways in which information is presented in a dictionary; dictionary use, which observes the reference acts and skills of dictionary users; and, not to forget, dictionary IT, which applies computer aids to the process of dictionary compilation. The words in any dictionary compilation are decided upon a few main criteria: how current they are, how reliable, how user-friendly, how informative, and how relevant.

Reflection 7 : Concordancer


Assalamualaikum w.b.t
At the end of learning Corpus Linguistics the last time, we were briefly introduced to the concordancer. In this entry, we will discuss the topic further. A concordance is a collection of the occurrences of a word-form, or an index of word-forms. A concordancer is the software that analyzes the occurrences of those word-forms. Be mindful that a concordancer does not translate; it analyzes.
What does a concordancer do?
·         Makes wordlists, which can be sorted.
·         Can include the frequency and percentage of each word.
·         Makes wordlists showing the occurrence of each word in its context; contexts can be selected and sorted.
·         Can handle large texts.
·         Can save and print the selected wordlist.
Concordancers are widely used in language teaching and learning as well as in data mining and data clean-up. There are many other areas where a concordancer is helpful, such as literary and linguistic scholarship. Some major concordance programs for the PC are the Oxford Concordance Program (OCP) and WordCruncher. They are widely used, reliable, flexible and straightforward.
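As a rough idea of what such programs do under the hood, here is a minimal keyword-in-context (KWIC) sketch of my own, assuming a plain-text corpus file named sample_corpus.txt; real programs like OCP or WordCruncher offer far more than this.

```python
# Minimal concordancer sketch (my own illustration): a frequency wordlist
# and keyword-in-context (KWIC) lines for one search word.
import re
from collections import Counter

def wordlist(text):
    """Word frequencies and percentages, most frequent first."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [(w, c, 100 * c / total) for w, c in counts.most_common()]

def concordance(text, keyword, width=30):
    """Every occurrence of the keyword with some context on each side."""
    lines = []
    for match in re.finditer(r"\b%s\b" % re.escape(keyword), text, re.IGNORECASE):
        start, end = match.start(), match.end()
        lines.append(text[max(0, start - width):start] + "[" + match.group() + "]" + text[end:end + width])
    return lines

text = open("sample_corpus.txt").read()     # hypothetical corpus file
for word, count, percent in wordlist(text)[:10]:
    print(word, count, f"{percent:.2f}%")
for line in concordance(text, "language"):
    print(line)
```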

Reflection 6 : Corpus Linguistics



Assalamualaikum w.b.t
Today we learned about Corpus Linguistics. Such an interesting term, isn’t it? Corpus Linguistics is the study of language as expressed in samples, known in this case as corpora, of ‘real world’ text. It is an approach to arriving at a set of abstract rules by which a natural language is governed or relates to another language. The work was originally done by hand, but corpora are now largely processed by automated means.
The word ‘corpus’ is derived from the Latin word meaning ‘body’. It may be used to refer to any text in written or spoken form. In modern linguistics, the term is used to refer to large collections of texts which represent a sample of a particular variety or use of language and are presented in machine-readable form. The scope of studies in corpus linguistics relates to the possible words, structures or uses in a language, their probable occurrence in a language, and the description and explanation of the nature, structure and use of language in particular areas such as language acquisition, variation and change.
There are several types of corpora available nowadays, including written or spoken (transcribed) language, modern or old texts, texts from one language or several languages, and texts from whole books, newspapers, journals, speeches, and extracts of varying length. Corpus Linguistics is now seen as the study of linguistic phenomena through large collections of machine-readable texts: corpora. These are used within a number of research areas, ranging from the descriptive study of the syntax of a language to language learning. The availability of corpora which are so similar in structure is a valuable resource for researchers interested in comparing different language varieties. Interestingly, there is also a Quranic Corpus. We Muslims can surely benefit from this insightful resource by attending to it in a profound manner.
As we are learning Computer Assisted Language Learning, the role of computers in Corpus Linguistics is of course essential. Among the roles of computers in Corpus Linguistics are to store huge amounts of text, quickly retrieve huge amounts of text, retrieve words, phrases or whole texts in context, sort linguistic items, increase reliability in searching, counting and sorting linguistic items, and provide accurate probabilities of occurrence of specific linguistic items.
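For instance, counting an item and reporting its probability of occurrence takes only a few lines of code. This is my own small sketch, and the corpus file name is hypothetical.

```python
# Small sketch (my own illustration) of how a computer counts a linguistic item
# and reports its probability of occurrence as a frequency per million words.
import re
from collections import Counter

text = open("corpus.txt").read()            # hypothetical corpus file
words = re.findall(r"[A-Za-z']+", text.lower())
counts = Counter(words)

item = "language"                           # the linguistic item to search for
freq = counts[item]
per_million = freq / len(words) * 1_000_000 if words else 0
print(f"'{item}' occurs {freq} times ({per_million:.1f} per million words)")
```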
Some corpus-related research areas are Computational Linguistics, Historical Linguistics, Lexicography, Machine Translation, Natural Language Processing (NLP), Social Psychology, Sociolinguistics, Stylistics, and many more interesting branches of study.
Later, we learned about something called a concordancer. It is an example of software used for corpus linguistics. Madam Rozina showed us a few examples of concordance programs and gave some simple demonstrations of how to use them. Using a concordancer, we can do amazing things such as finding out how many times the word ‘Muhammad’ or ‘Islam’ appears in the Quran. We were so thrilled to use the software in class and search for our own names!