As artificial intelligence (AI) continues to evolve and become more commonplace, the Centre on Aging has developed guidance on the use of AI in activities related to the Centre. This guidance is supplemental to the UM Artificial Intelligence – Guiding Principles.

It should be noted that AI is not a replacement for human work, judgement or critical thinking. There are several issues with AI, including:

  • Content or information from AI tools may not be properly cited or referenced.
  • Content or information from AI tools may be inaccurate, false, misleading, biased or offensive (Hocky & White, 2022).
  • Content or information may be repetitive or poorly worded/phrased.

Definitions: Non-generative Artificial Intelligence (AI) vs Generative AI

There is a difference between non-generative AI and generative AI. Non-generative AI (predictive/analytical) learns from data and makes decisions or predictions based on that data; examples include computer chess games, voice assistants such as Siri and Alexa, Google’s search algorithm, and predictive text in text messages, emails and Microsoft Word. These tools are trained to follow specific rules but do not create anything new.

Generative AI is the next generation of AI and can learn the underlying patterns in data to generate new data. Examples of generative AI include ChatGPT, Copilot, Gemini, and DeepL.

In general, it is acceptable to use non-generative AI in work and meetings associated with the Centre on Aging; however, generative AI may only be used in specific instances. Note that the use of some tools may even be prohibited. For example, the DeepSeek chatbot is prohibited at the University of Manitoba due to security and privacy concerns.

Reasons for this AI guidance

  1. Fairness
    The use of generative AI may impinge on academic integrity and on proprietary and intellectual property rights, and may use information without the permission of its authors.
  2. Copyright
    The use of generative AI may infringe copyright laws by using information without permission.
  3. Privacy
    Generative AI tools are trained on information uploaded to them, and any information uploaded to a tool may be reproduced and shared with others, thereby compromising privacy and confidentiality.
  4. Aligning with UM and national policies (e.g., Tri-Agencies)
    Many institutions, including the University of Manitoba and the Tri-Agencies, have developed policies around generative AI. By way of this guidance, the Centre on Aging will be aligned with the current policies of these institutions.
  5. Ensuring errors and bias are not entering Centre materials and outputs
    Generative AI tools are built from the information used to train them. This training information may be biased and, in particular, may include ageist language and information. While biases in general are problematic, bias toward older adults is particularly problematic for work and activities at the Centre on Aging. Generative AI tools are also known to produce errors, including fabrications, and their output must be double-checked by humans.

The Centre on Aging has developed guidance for specific activities. See below.

Student and Centre on Aging staff work

(Adapted from: UM Faculty of Social Work Bachelor of Social Work - Guidelines for the Ethical and Responsible Use of Artificial Intelligence in Academic Work)

Students and Centre on Aging staff include those who are employed and paid by the Centre on Aging. This does not include nil-appointed Research Affiliates unless they are working directly on a project with the Centre on Aging.

Centre staff and/or students may use AI technology to gather and organize ideas, as well as refine writing for grammar, provided their final work includes original analysis, synthesis, and critical thinking, with proper citation and acknowledgment of AI use.

AI-generated content cannot be presented as the original work of Centre staff and/or students, just as copying from any source is not considered original work.

Centre staff and/or students must not upload confidential, restricted, copyrighted, or proprietary content to generative AI tools and should verify whether generative AI-related tools require agreements that could compromise confidential information.

Centre staff and/or students are responsible for ensuring compliance with University policies, copyright laws, and intellectual property rights.

Centre staff and/or students are required to keep all research notes, draft versions, and a record of AI interactions. If requested, they must submit this portfolio for auditing, demonstrating the development of their final work and providing evidence of original thought and authorship. Records should include the platform, prompts used, output received, etc.

AI should not be used to process or summarize any confidential, restricted, copyrighted, or proprietary content.

For the preparation of a manuscript, all authors should review the journal’s author instructions for acceptable use of AI. Generative AI should not be used to substantially develop the manuscript or to manipulate images. All authors are responsible for ensuring the accuracy of AI-generated material, and all references should be checked for accuracy and relevance. Note that some generative AI platforms create fictional citations.

For research projects, use of any generative AI must adhere to the approved research ethics protocol for each project. 

For students, best practice is to consult your Centre supervisor before using generative AI for any work related to your employment at the Centre.

When introducing new AI technologies, consult with the Access and Privacy Office to determine if a Privacy Impact Assessment and/or Threat Risk Assessment is required before project initiation.  

Grant, fellowship and scholarship applications

It is expected that generative AI will not be used to substantially develop sections of the application. Substantive development includes drafting sections, creating data, or developing literature reviews or research arguments. Sentence-level tuning (e.g., grammar, clarity, rephrasing) is permitted as long as the author reviews the output and remains fully responsible for it. There are two main components of our generative AI guidance for applications.

  1. All applicants and co-applicants are ultimately accountable for the complete contents of their application.
  2. Privacy, confidentiality, data security and the protection of intellectual property must be prioritized in the development of applications.

These concepts align directly with the values that are essential to the conduct of all research-related activities: honesty, accountability, openness, transparency and fairness. These values therefore apply to the use of generative AI in the preparation of applications.

Procedure

Applicants are responsible for ensuring that the information included in their applications is true, accurate and complete and that all sources are appropriately acknowledged and referenced. Applicants should be aware that using generative AI may lead to the presentation of information without proper recognition of authorship or acknowledgement, which would breach copyright and constitute plagiarism.

Applicants will be required to declare whether they used any generative AI tools to help produce their application, including writing the proposal (including research proposals), producing images or graphical elements of the proposal, or collecting and analyzing data (if appropriate). Uses of AI continue to evolve, and we require that all uses be declared, including functional editing (i.e., non-generative AI). If generative AI was used, it is the applicant’s obligation to declare exactly how it was used and which tool was used, and to confirm that the output is factually accurate. Moreover, applicants must make every effort to ensure that the content is free from potential bias that could be introduced by the AI and does not inadvertently contain plagiarized content; both are common risks of generative AI outputs. Generative AI tools used to enhance or improve the language of the application must also be declared. All applicants are responsible for the content of their submission, and AI cannot be listed as an author of an application.

View more information on the Transportation Research Board website.

If generative AI has been used, the Modern Language Association (MLA) and the American Psychological Association (APA) have established guidelines for when and how to cite the use of a generative AI tool. They state:

  • cite a generative AI tool whenever you paraphrase, quote, or incorporate into your own work any content (whether text, image, data, or other) that was created by it
  • acknowledge all functional uses of the tool (like editing your prose or translating words) in a note, your text, or another suitable location
  • be sure to vet the secondary sources it cites

An AI acknowledgement statement or declaration may include the following:

  • Acknowledge: I acknowledge the use of [AI tool or technology name] and [link] to generate...
    OR: I have not used any AI tools or technologies to prepare this [report, manuscript, poster, application, review, etc.].
  • Prompt: I entered the following prompt/s...
  • Use: I used the output to.../I modified the output to...
  • Bias: I confirm that the information is free from bias and does not include ageist language and information.
  • Accuracy: I confirm that all [analysis, writing] are my own, and the sources cited are accurate and exist.

Grant, fellowship and scholarship reviewing

Privacy, confidentiality, data security, copyright laws, and the protection of intellectual property must be prioritized in the reviewing of applications. Grant, fellowship and scholarship applications are considered confidential, and the information in these applications must be treated as such. Reviewers should be aware that inputting application information into generative AI tools could result in breaches of privacy and in the loss of custody of intellectual property. Examples include transmission of application text to online tools such as ChatGPT and DeepL, which may store and reuse the data for future enhancement of the tool. Therefore, the use of publicly available generative AI tools for evaluating Centre on Aging grant, fellowship and scholarship applications is strictly prohibited.

However, for Centre grant, fellowship and scholarship reviews, the use of generative AI tools to assist with reviews is allowable for general queries (e.g., what literature is available on a topic or what methods are appropriate for statistical analyses), so long as detailed information from applications is not used in prompts. Reviewers who choose to use such tools are expected to use recommended tools that rely on academic literature (see AI for Research - Artificial Intelligence - LibGuides at University of Manitoba) and to be mindful to also consult original sources. Reviewers will be asked to identify the use of any AI in their review of applications.

Grant, fellowship and scholarship adjudication committee meetings will not be recorded or transcribed by any member of the committee. Given the confidential nature of adjudication meetings, both in terms of the individuals being assessed and the original material being evaluated, this type of recording could infringe upon several of the principles mentioned above (fairness, privacy, copyright, etc.). The use of closed captioning during a meeting would be allowed as an accessibility accommodation.

Meetings

(not for data collection purposes; those would require a specific research ethics protocol)

Adapted from the U.S. Department of Education website

The use of generative AI for the transcription (including the capture of meeting notes) or recording of meetings connected to the Centre on Aging (COA) is generally permissible unless otherwise precluded by law. However, as noted above under Grant, fellowship and scholarship reviewing, generative AI is not permissible during adjudication committee meetings or other meetings where research proposals are discussed. During meetings where research proposals are not discussed, the following should be considered:

  • Notification: COA staff and/or students should (1) notify all attendees, via a written and verbal disclaimer, that they are enabling AI transcription or recording of a meeting before doing so, and (2) inform attendees of the ways in which the transcription will be used, including whether the transcript will be distributed to meeting participants. When meeting participants decline to participate in a meeting that will be transcribed or recorded, COA staff and/or students may need to find alternative ways to conduct the meeting.
  • Accuracy: After an AI-transcribed meeting has ended, COA staff and/or students should review and appropriately edit the transcript to ensure clarity and accuracy. COA staff and/or students should be aware this process can be time- and resource-intensive.
  • Distribution: As a good practice, COA staff and/or students should offer a reviewed and appropriately edited transcript to all meeting participants after the AI-transcribed meeting has ended.
  • Agency Requests: Under some circumstances, COA staff may require that an AI recording and/or AI transcription function be disabled for meetings.
  • Records Retention: Any transcript or other related records generated by AI (including original and edited transcripts) in a COA meeting must be retained as project or Centre records for at least the current calendar year plus six years, after which documents can be transferred to archives as per University policy GOV-030 (Council and Committee Files).
  • Privacy: Before using AI transcription or recording services or products, COA staff and/or students should ensure an appropriate understanding of how data is processed and take appropriate steps to protect confidential or personally identifiable information. Generally, meeting participants should avoid discussing personally identifiable information as a safeguard against unauthorized sharing, as well as to make it easier to share the transcripts and to respond to requests to access the transcripts.

Spring Research Symposium posters

It is expected that generative AI will not be used to substantially develop sections of the research poster. Substantive development includes drafting sections, creating data, or developing literature reviews or research arguments. Sentence-level tuning (e.g., grammar, clarity, rephrasing) is permitted as long as the author reviews the output and remains fully responsible for it. See the Grant, fellowship and scholarship applications section above, as it also applies to research posters at the Spring Research Symposium. Note that all AI use must be acknowledged in the poster, including images generated using AI. The figure legend should specify that the image is “AI-generated” and could also list the program/platform used along with the prompt.

Centre on Aging events

Using AI tools to record, summarize, take notes or generate a transcript at Centre on Aging events (presentations, workshops, symposia) is strictly prohibited due to copyright and the other issues mentioned above. If an accommodation is needed, it should be requested in advance and permission obtained.

Editing vs Generative AI

It can be challenging to decide whether the AI used is performing functional editing (non-generative AI) or generating substantive content (generative AI). Consider the following when deciding if your use of AI is acceptable for the work you will be doing.

Does the tool require you to enter a prompt, or does it create new and original content? If the answer to either question is yes, then it is likely generative AI and might not be an acceptable use. In contrast, using the Editor function in Word, for example, does not require the use of prompts and so would be considered non-generative AI.

When in doubt, check with a Centre on Aging staff member.

Acceptable uses matrix

Document reviews

This document has been reviewed by the Vice-President (Research and International) Office, November 19, 2025.

Document review cycle

This document will be reviewed annually given the rapid changes in AI technologies and regulations.

Resources

APA Style – How to cite ChatGPT.  

Centre for the Advancement of Teaching and Learning – Artificial Intelligence

Interagency Research Funding - Guidance on the use of Artificial Intelligence in the development and review of research grant proposals

MLA Handbook – How do I cite generative AI in MLA style?  

Marr, B. (2023, July 24). The difference between Generative AI and Traditional AI: An easy explanation for anyone. Forbes.

Transportation Research Board.

UM Academic Integrity & Artificial Intelligence. 

UM Artificial Intelligence – Guiding Principles. (Oct 1, 2024). UM Committee on Artificial Intelligence.

UM Common Records Authority Schedule - Governance and Administration: GOV-030 (Council and Committee Files).  

UM Copyright Office – Copyright and Generative AI.  

UM Copyright Office – Research rights in the evolving generative artificial intelligence landscape.  

UM Faculty of Social Work Bachelor of Social Work - Guidelines for the Ethical and Responsible Use of Artificial Intelligence in Academic Work.  

UM LibGuide – AI for Research.  

U.S. Department of Education – Grants and artificial intelligence (AI): Statement on Use of Artificial Intelligence (AI) for the Transcription or Recording of Meetings by ED Grantees.

Contact us

Centre on Aging
338 Isbister Building
183 Dafoe Rd
University of Manitoba (Fort Garry campus)
Winnipeg, MB R3T 2N2 Canada

204-474-8754
Monday to Friday, 8 am to 4 pm