Find further details of each talk in the Book of Abstracts here.
Those marked with ★ are eligible for nomination to a student researcher award. Find the full list of awards here.
You are welcome to use the comment function at the bottom of the page to comment on papers you have seen and/or submit questions that you would like to see raised in the discussion panel. If replying to an individual paper, please specify who you are talking to.
Panel chaired by Alison Duguid.
Don’t Interrupt Me When I am Speaking: A Multidisciplinary Approach to Interruption Force in Chinese Everyday Conversations ★
Yingnian Tao – Lancaster University
y.tao4@lancaster.ac.uk
@Daphne68947756
https://www.researchgate.net/profile/Yingnian_Tao2
[short paper]

Formulaic sequences in Early Modern English: A corpus-assisted historical pragmatic study ★
Ding Huang – Heidelberg University
ding.huang@stud.uni-heidelberg.de
@elaine_d_huang
[short paper]

Framing advice in the Islamic Sermons Online corpus ★
Cipto Wardoyo – Coventry University
wardoyoc@uni.coventry.ac.uk / ciptowardoyo@uinsgd.ac.id
https://ppsuinsgd.academia.edu/ciptowardoyo
[short paper]

Hausa discourse markers in computer-mediated communication
Tristan Purvis – American University of Nigeria
[short paper]

Investigating professional diplomatic discourse: a qualitative, corpus-based analysis of pragmatic function in the diplomatic cable genre ★
Jessica Stark – Aix-Marseille University
[long paper]

HUM19UK Fiction Corpus: enhancing the methodological reliability of corpus stylistic studies ★
Fransina Stradling, Brian Walker, Dan McIntyre, Michael Burke, Elliott Land & Hazel Price – University of Huddersfield
fransina.stradling2@hud.ac.uk
@f_stradling
linguisticsathuddersfield.com/hum19uk-corpus
[short paper]

Multimodal Corpus Analysis of Tourism Promotional Communication Online ★
Elena Mattei – University of Verona
elena.mattei@univr.it
@ElenaMattei10
http://www.dlls.univr.it/?ent=persona&id=22467
[short paper]

Stancetaking in Vietnamese and English texts in the light of appraisal theory ★
Tieu Thuy Chung – University of Queensland
tieuthuy.chung@uqconnect.edu.au
https://www.researchgate.net/project/Writing-with-Attitude-a-Learner-Corpus-Study-of-Appraisal-Resources-of-Vietnamese-and-Khmers-L2-English-Writing
[short paper]

Textures of John Clare’s sonnets: A corpus-based structural comparison between three master sonneteers
Kazutake Kita – Tokyo University of Science
[short paper]

‘What’s So Special About The Circus?’ ★
Katharine Kavanagh – Cardiff University
kavanaghk@cardiff.ac.uk
@bustingfree
http://TheCircusDiaries.com
[short paper]
@f_stradling: As a researcher interested in 19th-century literary texts, I am much encouraged and excited by your project! Do you have any plans to extend your work to construct, for example, a poetry corpus?
Thanks, Kazutake, for your question. I hope you’ll find the corpus useful, and do let us know what project you end up using it for! We do not have any plans for creating a 19th-century poetry reference corpus at present, but it would be a great idea!
@f_stradling: Many thanks for your reply. I’m looking forward to making use of your corpus for my piece of research!
Thank you, Tristan Purvis, for the interesting talk. Do you see any differences in the use of discourse markers between the online CMC mode and the offline face-to-face mode, for example in the positions of the markers, or in their forms (some markers in CMC tend to be abbreviated)? And do you consider multimodal resources such as emojis to be a special form of discourse marker in the online mode?
Thanks for your questions, Yingnian. As shared in our panel discussion, I’ll be going about things somewhat backwards: looking at patterns in online data first, since there are not yet comparable corpora or documentation covering offline (spoken/conversational) discourse markers. However, I have recordings from radio broadcast discussions (live talk shows) that will be considered at a later stage, at least for a rough, impressionistic comparison to begin with, since comprehensive tagging and analysis of that material would be a time-consuming project of its own.
I definitely consider emoji to count as discourse markers. Annotations on these will be carried out after completing the tagging of the lexical targets.
Thank you very much, Tristan. Looking forward to your paper and further research. 🙂
Katharine, excellent presentation! What software did you use to make this video?
Thank you! I used Adobe Premiere Pro, but in the past I’ve done something similar with iMovie on macOS too, although it has much more basic options. It was filmed against a cheap green-screen sheet that I bought off eBay a couple of years ago 🙂
Thank you!
Fransina, thanks for creating such a useful resource and sharing your work! I wondered how you found the quality of the repositories’ files: did you come across many errors? Cheers!
Thanks for your question, Kate! Most texts in our corpus have been taken from Project Gutenberg, an electronic library with over 60,000 digital copies of texts. Volunteers proofread each digitized text published on Project Gutenberg before publication and check the OCR’d version against an original physical copy. Because Project Gutenberg texts come in multiple formats (e.g. HTML or .txt, with or without illustrations, etc.), we sometimes had to delete illustrations or page numbers to avoid non-text characters interfering with eventual analysis and to ensure consistency of presentation across all corpus files, but otherwise we found its texts to be very accurate. We had more problems with texts extracted from other databases, which included Celebration of Women Writers, Victorian Women Writers Project, Chawton House, and Public Library UK. Conversion errors from the texts’ original digital format to .txt files meant that we had to do a bit more cleaning up with these texts. Usually this involved checking the text in our .txt file against an original copy, whether digital or physical, and deleting special characters that had accidentally been substituted for text. For some texts this took a considerable length of time, so we were grateful there were not many texts we had to clean ourselves!