Tools for Academic Success
2026-01-27
Working collaboratively
Collaborative Software: MS vs Google
Studying online:
Online Search Tools:
\[
\left\{\begin{matrix}\textbf{Registration}\\ \text{\small self-registration}\\ \text{\small auto registration}\end{matrix}\right\}
\rightarrow
\left\{\begin{matrix}\textbf{Login}\\ \text{\small email}\\ \text{\small password}\end{matrix}\right\}
\rightarrow
\left\{\begin{matrix}\textbf{Courseware}\\ \text{\small Announcements}\\ \text{\small Assignments}\\ \text{\small Handouts}\\ \text{\small Slides}\\ \text{\small Discussion Groups}\\ \text{\small Gradebook}\end{matrix}\right\}
\]
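The flow above can be read as a tiny data model: an account is created at registration, a login check gates access, and a successful login unlocks the courseware. Here is a minimal sketch of that idea; all class and method names are hypothetical illustrations, since real LMSs (Moodle, Canvas, Google Classroom) each expose their own APIs.

```python
# Minimal sketch of the flow above (Registration -> Login -> Courseware).
# Hypothetical names for illustration only, not any real LMS API.
class LMS:
    COURSEWARE = ["Announcements", "Assignments", "Handouts",
                  "Slides", "Discussion Groups", "Gradebook"]

    def __init__(self) -> None:
        self.accounts: dict[str, str] = {}  # email -> password (real systems hash these)

    def register(self, email: str, password: str) -> None:
        """Self- or auto-registration creates the account."""
        self.accounts[email] = password

    def login(self, email: str, password: str) -> list[str]:
        """A successful email/password login unlocks the courseware."""
        if self.accounts.get(email) == password:
            return self.COURSEWARE
        return []

lms = LMS()
lms.register("student@payap.ac.th", "secret")      # Registration
print(lms.login("student@payap.ac.th", "secret"))  # Login -> Courseware
```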
MS OFFICE ONLINE
GOOGLE DOCS
Moodle - lms2.payap.ac.th
Canvas (Instructure)
Google Classroom
LMS
Microsoft Copilot (formerly Bing Chat): copilot.com - general text and images
Google Gemini: gemini.google.com - text and software development
Claude (Anthropic): https://claude.ai - communicating text and data
ChatGPT (OpenAI): https://chatgpt.com - cutting-edge LLMs and agents
Grok (xAI): https://grok.com - removal of social filters
DeepSeek (深度求索): https://www.deepseek.com - low-cost Chinese LLM
Generate ….
Tell me about ….
Imagine that ….
Act as if ….
Do you know about ….
Ask the model clear, precise and specific questions
Keep ChatGPT on Point
Be articulate
Be patient: build up and test context step by step
Provide enough relevant information
Offer specific directions
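These habits apply whether you prompt in a chat window or through an API. Below is a minimal sketch using the OpenAI Python SDK; the model name, system message, and prompt are illustrative assumptions, not fixed requirements.

```python
# Minimal prompting sketch with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the model name
# "gpt-4o-mini" is illustrative and may differ for your account.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # A system message keeps the model on point.
        {"role": "system",
         "content": "You are a concise academic writing tutor."},
        # A clear, specific question with enough context.
        {"role": "user",
         "content": "Rewrite this thesis statement to be more specific: "
                    "'Social media affects students.'"},
    ],
)
print(response.choices[0].message.content)
```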
Common spelling, word usage, and word order
Tone, rhyme and rhythm
Music, narration, animation and illustration
AI Hallucinations: LLMs were not designed to be fact-retrieval engines. They work by predicting the probability of the next word in a sequence (a toy illustration follows this list), so they may produce outputs that are factually incorrect, nonsensical, or entirely fabricated.
Model Bias: LLMs are built from large bodies of text, often scraped from the Internet. This data contains biases that LLMs can learn and propagate. LLMs can give responses that are biased or disparaging, or provide lower-quality responses for certain subgroups.
AI Privacy Concerns: LLMs can leak or inadvertently disclose personally identifiable information or other sensitive or confidential details.
Toxic, Harmful, or Inappropriate Content: LLMs are capable of creating toxic, harmful, violent, obscene, harassing, and otherwise inappropriate content.
AI Copyright Infringement and IP Risks: LLMs are often trained on copyrighted data and can therefore generate content that is identical or similar to copyrighted material. They can also leverage material found online, such as a person's tone or voice, to create content highly similar to what that person might have produced.
Security Vulnerabilities: LLMs have expanded the attack surface for security risks. Feedback cycles can also build up an identity and profile of the user.
Dependence and Reduced Discretion: The speed and general accuracy of LLMs can cloud judgement and reduce long-term retention.
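As noted under AI Hallucinations above, an LLM produces text by sampling the next word from a probability distribution, not by looking facts up in a database. The toy sketch below invents a tiny distribution to show why fluent output can still be false; nothing here is a real model.

```python
# Toy next-word predictor. The probabilities below are invented purely
# for illustration; a real model learns a distribution over a
# vocabulary of tens of thousands of tokens.
import random

next_word_probs = {
    "The capital of Australia is": {
        "Canberra": 0.60,   # correct
        "Sydney": 0.35,     # fluent but false
        "Melbourne": 0.05,  # fluent but false
    },
}

def predict(context: str) -> str:
    """Sample one continuation in proportion to its probability."""
    dist = next_word_probs[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

random.seed(1)
for _ in range(5):
    # In expectation, ~40% of these fluent sentences are false:
    # fluency comes from probability, not from fact checking.
    print("The capital of Australia is", predict("The capital of Australia is"))
```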
\[\Huge ???\]