Category: Opinions

  • Is anybody actually writing “Modern C++”?

    If you spend any time watching C++ conference talks, reading the standard, or skimming the Core Guidelines, you could be forgiven for thinking the industry quietly agreed to rewrite itself overnight. The code in those talks is elegant. Expressive. Composed of ranges, concepts, constexpr, value semantics, and a conspicuous absence of raw pointers.

    In college, though, C++ often shows up in a different outfit: data structures, intro systems, maybe compilers or graphics if you are lucky. It is taught as “the language you use to understand memory,” not “the language you use to write elegant libraries.” Students learn pointers early. They learn manual lifetime management. They learn to fear undefined behavior.

    Then you open a real codebase, written by a team that ships products for a living.

    And… yeah. Something is off.

    So let’s ask the uncomfortable question out loud: is anybody actually writing C++ the way C++20 suggests?

    The answer is yes. But not in the way most people mean.

    What “the C++20 Way” Even Is

    When people say “modern C++,” they rarely mean one feature. They mean a style that has emerged over the last decade: code that leans on value semantics and RAII, prefers standard algorithms over bespoke loops, uses strong types to make illegal states harder to represent, and pushes intent into the type system instead of into comments.

    In practice, “the C++20 way” usually looks like a handful of recurring moves. You see std::optional and std::variant where older code used sentinel values. You see enum class where older code used loosely typed integers. You see concepts where older template code relied on SFINAE and error messages as a rite of passage. You see constexpr used to make compile-time decisions explicit rather than accidental.

    And, maybe most importantly, you see fewer macros, fewer raw owning pointers, and fewer surprise lifetime rules.

    None of this descended from on high with a standard release. It grew out of guidelines, talks, libraries, and hard-won experience. It is not “how C++ must be written.” It is how C++ can be written when everything goes right.

    Who’s Actually Writing C++ This Way?

    Some people really are writing the C++ from the talks. It is not imaginary. It is just concentrated in a few places.

    Library authors and C++ specialists

    If you work on standard library internals, Boost-like libraries, header-only abstractions, compilers, or tooling, this style is not aspirational. It is necessary.

    These teams spend all day living in templates. They are the reason concepts exist. They treat constexpr as a feature, not a punishment. Their code looks like the talks because they are the ones giving the talks.

    Greenfield, performance-heavy projects

    New subsystems and high-end projects are the next most likely place to find “conference C++.” Think simulation, robotics, finance, and HPC, where performance and correctness are non-negotiable and the codebase is not dragging decades of baggage.

    Even here, adoption is cautious. Smart pointers become the default, but they do not replace every custom allocator. Expressive types show up, but teams still measure every abstraction. Ranges do not replace every loop. Modules exist mostly in slide decks. Compile-time complexity is treated as a real cost.

    It is modern, but it is modern with a budget.

    Teaching environments

    Academia can produce the cleanest C++ precisely because it can ignore so many constraints. When you are not supporting four platforms, three compilers, and one vendor-patched standard library, you can teach the simpler story: avoid raw owning pointers, express intent in types, and let the compiler enforce invariants.

    Students often learn a cleaner C++ than they will see in their first job.

    That is not a bug. That is a north star.

    Who Mostly Isn’t Writing C++ This Way?

    If “conference C++” were the median, this essay would not exist. Most production C++ lives under constraints that talks rarely linger on.

    Legacy codebases

    Millions of lines of pre-C++11 code do not get magically modernized because a new standard dropped.

    Modernization is usually incremental and local. You modernize at boundaries, where it is safe. You introduce a new type here, replace a few ownership patterns there, move a subsystem toward a newer dialect when you can, and leave the rest alone unless you enjoy production outages.

    These teams live in a hybrid world: “modern where possible, old where required.” And that is often the correct choice.

    Cross-platform product teams

    If you support old compilers, niche platforms, embedded targets, or vendor-patched libraries, you cannot assume full C++20 support.

    So you end up writing C++17-ish core logic, adopting a few C++20 features where they help, and building careful fallbacks. This is not conservatism. It is survival.

    Game studios

    Games deserve special mention because they are often very modern, but almost never ideological.

    Game teams care about debuggability, build times, and predictable performance. They use RAII everywhere. They use smart pointers where they fit the engine’s ownership model. Exceptions are often banned. Heavy abstractions and ranges show up selectively, because the cost is paid by every developer who has to build, debug, and profile the game.

    Game C++ looks modern, but it does not look like a conference slide.

    And it should not.

    The Uncomfortable Truth

    The C++ standard does not describe how people do write code. It describes how people could write code if they were starting today, with no legacy constraints, excellent toolchains, uniform compiler support, and deeply trained developers.

    Real codebases are messier, older, and more political than that.

    What’s Actually Happening in Practice

    C++20 has not become doctrine. It has become a toolbox.

    Teams adopt what pays for itself. They take the features that reduce bugs. They take the features that clarify intent. They take the features that do not explode compile times, onboarding, or debugging.

    And they ignore the rest. Sometimes that is shortsighted. Often it is wise.

    This is not failure. It is engineering.

    A Hot Take Worth Saying Out Loud

    C++20 is not a style guide. It is a pressure gradient.

    New code drifts toward it. Old code resists it. Great engineers use it deliberately. Poor engineers misuse it and blame the language.

    The difference is not the standard.

    It is judgment.

    A Note for Educators

    If you teach C++ “the C++20 way,” you are doing the right thing, as long as you are honest about reality.

    Teach it as the ideal we aim for, not the average codebase someone will inherit. The students who understand that distinction tend to become better engineers faster, because they learn both the direction of travel and the constraints that slow it down.

    Final Thought

    The right question is not “Is anyone writing C++ like the standard suggests?”

    It is: “Which parts of modern C++ meaningfully improve this codebase?”

    That is the question professionals actually answer every day.

    And it is the real lesson C++20 teaches.

    Fair warning: I have strong opinions 😄

    Selah.

  • Online Education: Changes in Attitude, Changes in Modality

    For nearly three decades, online education has centered on one question: Is it better to teach online through real-time sessions or self-paced modules? The answer has shifted dramatically—and today’s research reveals a far more nuanced picture than early “either/or” debates suggested.

    When universities first moved courses online in the late 1990s and early 2000s, asynchronous instruction dominated. Limited bandwidth and early LMS platforms like Blackboard and WebCT made videoconferencing unreliable. Scholars such as Michael Moore—whose Transactional Distance Theory shaped early distance-ed thinking—emphasized that effective online learning required bridging the psychological gap between teacher and student through well-structured materials and meaningful dialogue, most of which happened via text-based, time-flexible tools.

    By the mid-2000s, research consistently found that asynchronous environments supported reflection, deeper discussion, and learner autonomy (Hrastinski, 2008). Asynchronous wasn’t just convenient—it was considered the gold standard for well-designed online education, especially for adult learners balancing work, family, and school.

    As broadband expanded and tools like Zoom, Adobe Connect, and Collaborate improved, synchronous online learning became more viable. Studies throughout the 2010s showed that live sessions boosted social presence, helped students maintain momentum, and let instructors respond to confusion in real time. Research also began suggesting that blended approaches—combining asynchronous materials with occasional synchronous meetings—could outperform either model alone.

    Still, attitudes remained cautious. Most fully online programs stuck with asynchronous delivery because it scaled well and met the needs of working adults. Synchronous formats were often seen as a nice addition, not a core design element.

    The Pandemic: Synchronous Goes Mainstream

    COVID-19 changed everything. Practically overnight, Zoom became the global classroom. For many students and instructors, synchronous online learning was their first experience with online education of any kind.

    Research from 2020–2022 revealed several key themes:

    • Students valued structure. Synchronous classes provided routine during a chaotic period.
    • Faculty found real-time teaching easier to manage than designing robust asynchronous modules on short notice.
    • Zoom fatigue was real, and bandwidth limitations, childcare demands, and time zone conflicts disproportionately affected lower-income and rural learners.
    • Students in hastily converted asynchronous courses often felt isolated or under-supported.

    Most importantly, the pandemic normalized synchronous online learning at an unprecedented scale. Many students—especially traditional undergrads—discovered they prefer some real-time interaction online.

    Post-Pandemic Research: Changes in Attitude

    As researchers have examined online learning beyond the “emergency remote” context, a clear pattern has emerged: neither synchronous nor asynchronous online education is inherently superior. Each offers distinct advantages.

    Asynchronous strengths:

    • Maximum flexibility
    • Self-paced, repeatable content
    • Deeper opportunities for reflective engagement

    Synchronous strengths:

    • Immediate feedback and clarification
    • Stronger sense of connection and accountability
    • Better fit for discussion-heavy or skills-based courses

    Recent meta-analyses find similar learning outcomes across both formats when courses are intentionally designed. Where differences appear, they relate more to student characteristics (e.g., work schedules, self-regulation skills) and course type than to modality.

    The most important post-COVID trend is the rise of “bichronous” online learning, a term coined by Martin, Polly, and Ritzhaupt (2020) to describe courses that intentionally blend both modes. Students complete core content asynchronously while engaging in targeted synchronous sessions for problem-solving, discussions, or community building. Recent studies show high satisfaction with these hybrids, especially when synchronous time is used strategically rather than habitually.

    Generative AI: Killer or Savior of Online Education?

    Generative AI is seen by some as the death of asynchronous online education—and by others as its savior. Asynchronous courses rely heavily on content, self-directed learning, and delayed interaction.

    The loudest criticism of this approach is that it makes students feel alone, unsupported, and disconnected. But chatbots integrated into the LMS—serving as first-line tutors and discussion thread guides—can provide real-time support in a non-real-time environment.

    At the same time, AI is disrupting assessment and causing concern among faculty over academic integrity. The tools make it easy for students to cheat by auto-generating essays, code, and short answers.

    The response has been a move toward more open-ended assessments—projects, reflections, case studies, and multimedia submissions—where AI becomes a tool rather than a shortcut. Faculty are also adopting process-based assignments like think-aloud activities, staged coding tasks, or version-controlled writing.

    On the flip side, AI makes life easier for instructional designers. Asynchronous courses require tight, highly structured, and carefully planned materials. Here, AI’s ability to rapidly generate and update content—while creating multiple versions of the same concept—lowers the barriers to quality design.

    We see AI not as the “killer” of asynchronous online education but as the “enabler” and “accelerator” of bichronous online education. Tools for text, image, and video generation create new opportunities.

    The Take-away

    For nearly three decades, the distinction between synchronous and asynchronous learning has shaped online education—from course design to student expectations. This evolution began in the bandwidth-limited era when asynchronous experiences defined quality online learning. It continued through the rise of robust synchronous platforms in the 2010s, then accelerated dramatically when the COVID-19 pandemic pushed real-time online instruction into the mainstream. Today, research shows that neither modality is inherently superior. Each offers unique strengths, and students increasingly prefer thoughtful blends that balance flexibility with meaningful interaction. The future lies in treating synchronicity as a design choice rather than a philosophical divide.

    AI amplifies this shift by transforming asynchronous learning itself. Intelligent tutors, adaptive feedback systems, dynamic content generation, and AI-integrated assessments reduce the isolation traditionally associated with self-paced courses while elevating personalization and rigor. At the same time, faculty must rethink assessment integrity and guide students in responsible AI use. Asynchronous education is no longer simply “anytime learning”—supported by AI, it’s becoming interactive, adaptive, and deeply student-centered. Together, these trends signal a future where online education is shaped not by the constraints of time, but by intentional design, learner support, and strategic use of new technologies.

    Selah.

  • How Third-Party Course Content Is Destroying Higher Education

    Colleges and universities built their reputations on the quality of their teaching and the expertise of their faculty. A degree meant you had learned from scholars who designed, tested, and refined the very curriculum that carried the institution’s name. But in recent years, this foundation has been quietly eroded by the rise of third-party course content providers—companies that package “ready-to-teach” online courses for universities to rebrand as their own.

    At first, this outsourcing looked like convenience. Today, it’s corrosion.

    1. The Erosion of Academic Integrity

    When a university licenses pre-made courses, it gives away its most sacred academic function: curriculum design. Faculty once spent months shaping syllabi to fit local program outcomes, student needs, and institutional missions. Now, many are handed “turnkey” shells built by strangers—often containing outdated information, no local context, and little alignment with departmental standards.

    This undermines the authenticity of the university’s promise. Students think they are learning from that university’s faculty, but in truth they are completing a commodity course produced by a contractor. The result is a diploma that increasingly reflects a licensing relationship, not an educational experience.

    2. Faculty Deskilled, Then Replaced

    Third-party content deskills faculty. Once instructors are told to “facilitate” someone else’s course rather than create their own, they cease to be educators and become content proctors. Their authority over learning design, assessment, and even grading can be stripped away through automated quizzes and publisher rubrics.

    Eventually, administrators notice that if a course can be taught by anyone following a script, it can also be taught by no one—or by the lowest-cost adjunct available. The business model’s logic leads inexorably to layoffs, consolidation, and the hollowing-out of the academic profession itself.

    3. Students Lose the Human Element

    Education is not the same as content delivery. Learning happens through mentorship, intellectual friction, and local context—when faculty connect a concept to a community, a region, or a student’s lived experience.

    Third-party vendors flatten that richness into generic modules designed to scale across thousands of institutions. A course on “Introduction to Business” becomes a cookie-cutter PowerPoint set with no awareness of the local economy, no discussion of regional industries, and no dialogue with students’ realities.

    Students sense this disconnect. Surveys repeatedly show that learners in pre-packaged online courses feel less engaged, less connected, and less confident in their instructors’ expertise.

    4. The Corporate Capture of the Curriculum

    Outsourcing curriculum means outsourcing values. Third-party content providers are not accountable to faculty senates or accrediting bodies in the same way universities are. Their incentives are commercial, not educational.

    When companies determine what students learn—and universities merely rent that content—the door opens for subtle corporate bias. Which case studies are used in a business course? Which programming languages are prioritized in a computer science module? Which health data examples are selected in a nursing simulation? Each of these choices embeds an ideology of the marketplace, not of the academy.

    5. The Path Forward: Reclaiming Academic Sovereignty

    Universities must rediscover what made them trusted in the first place: faculty governance, curricular integrity, and intellectual independence. That doesn’t mean rejecting all collaboration—it means controlling it.

    Partnerships with vendors can be tools, not replacements. Faculty should lead course design, adapting external materials where appropriate but ensuring that institutional mission and local expertise remain at the center. Accrediting agencies and state boards should require disclosure when third-party content exceeds a certain percentage of a degree program. Students have a right to know when their “university course” was written by someone who has never set foot on campus.

    If higher education fails to reclaim authorship of its own curriculum, it will become a branding service, not an intellectual community.

    Closing Thought

    The crisis is not about technology or convenience—it’s about ownership of knowledge. When universities surrender that ownership to third-party content companies, they trade centuries of academic tradition for a subscription plan. The result is an education that looks like college but feels like customer service.

    It’s time to take the curriculum back.

    Caveat: This post was edited with the assistance of AI research and editing tools but all opinions expressed are the opinions of the author.

    As always, solely the opinions of the author, your mileage may vary, standard disclaimers apply.

    Selah.