Intro
During my seven-month internship at Advantech, I learned a great deal, and I’d like to share this experience as both a record and a reflection.
I’m deeply grateful to my internship manager Jack, my supervisor Paul, my mentor Joe, and Ryan—the technical expert who took great care of all of us throughout this journey.
In this post, I'll cover three main aspects: my work responsibilities, the company culture, and my overall reflections. I hope it will be helpful to you.
Tech Stack During the Internship
Throughout these seven months, the core skills required for my internship included multi-agent system design, recommendation algorithms, and designing end-to-end model training pipelines.
Although I had no experience with these topics before joining, I learned primarily by doing. Advantech has a well-established mentorship system, which strongly encourages interns to actively discuss ideas with their mentors. Interns are even allowed to choose projects based on their own interests, and as long as the workload is manageable, there is no strict limit on the number of projects one can take ownership of.
In addition, I reported to my supervisor on a weekly basis, covering the current direction of each project, completed milestones, and upcoming action items.
Main responsibilities
I’ll briefly recap my main responsibilities during my internship at Advantech. However, since work styles can vary significantly across different departments, teams, and managers, the following reflects only my personal experience and is shared as a reference for anyone curious about what working at Advantech might look like.
Competitive Analysis Agent
Advantech is widely known as a leader in industrial computing, and most of its customers are B2B. Even when many customers already lean toward choosing Advantech, sales and PMs still have to spend a significant amount of time doing upfront research—checking competitors’ specs, pricing, and key features, then comparing them item by item against our own products. By the end, they often need to pull in multiple internal teams to validate details through meetings before they can consolidate everything into a report that genuinely persuades the customer. This workflow is critical, but it is also extremely time- and energy-consuming.
That’s why we wanted to build a “Competitive Analysis Agent.” With two modules—Deep Search and Deep Research—we aimed to help the multi-agent system expand both the breadth and depth of information gathering, and conduct targeted research based on specific needs. The goal was to remove the repetitive “start-from-scratch” effort, so sales teams could quickly understand where competitors are strong, where they are weak, and which differentiators of our products are most likely to match customer needs. Beyond ensuring source reliability, I also learned that frameworks like SWOT and Porter’s Five Forces can significantly improve the structure and argumentative strength of the final report: SWOT helps consolidate scattered findings into clear priorities and positioning for Advantech, while Five Forces explains—through industry dynamics—why customers may be convinced under certain conditions. As a result, the output is not just a list of comparisons; it clearly answers “which differences matter and why, how the evidence supports the argument, and what the next steps should be,” making it more persuasive for both internal decision-making and external communication.
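To make the consolidation idea concrete, here's a minimal sketch of how scattered findings could be bucketed into a SWOT structure before report writing. The `SwotReport` class, field names, and sample findings are purely illustrative, not the schema we actually used:

```python
from dataclasses import dataclass, field

@dataclass
class SwotReport:
    """Buckets scattered competitive findings into SWOT quadrants.

    Illustrative only; the real system's schema differed."""
    strengths: list[str] = field(default_factory=list)
    weaknesses: list[str] = field(default_factory=list)
    opportunities: list[str] = field(default_factory=list)
    threats: list[str] = field(default_factory=list)

    def add(self, quadrant: str, finding: str, source: str) -> None:
        # Keep the source next to each claim so the final report
        # can cite where every comparison came from.
        getattr(self, quadrant).append(f"{finding} (source: {source})")

report = SwotReport()
report.add("strengths", "Wider operating-temperature range than competitor X", "spec sheet")
report.add("threats", "Competitor Y undercuts on entry-level pricing", "public price list")
```

Keeping every finding tied to a quadrant and a source is what turns a pile of search results into an argument the sales team can actually defend.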
In early 2025, the ecosystem of tools and frameworks for multi-agent systems expanded noticeably. Beyond Manus, which had just been released at the time (and which I didn't expect would later be acquired by Meta), there was also the open-source OpenManus gaining traction on GitHub. I also studied several other frameworks, such as AutoGen, Amazon Bedrock Agent, and Open Deep Research. Initially, I chose CrewAI as the primary implementation framework and invested time in understanding its design philosophy and workflow.
However, after discussions with other interns and senior colleagues, I found that LangGraph better matched our requirements—especially in terms of controllable workflow orchestration and clear state management—so I switched to implementing the competitive analysis agent with LangGraph. Although the exploration cost in CrewAI was not trivial, it helped me build a more complete set of criteria for selecting and designing multi-agent frameworks.
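For a rough idea of what drew me to LangGraph, here is a minimal sketch of a Deep Search → Deep Research → report flow expressed as an explicit state graph. The node names, state fields, and stubbed logic are my assumptions for illustration, not the production graph:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ResearchState(TypedDict):
    query: str      # the competitive question being researched
    findings: list  # evidence accumulated across nodes
    report: str     # final consolidated report

def deep_search(state: ResearchState) -> dict:
    # Breadth: gather candidate sources for the query (stubbed here).
    return {"findings": state["findings"] + [f"sources for: {state['query']}"]}

def deep_research(state: ResearchState) -> dict:
    # Depth: drill into the gathered sources (stubbed here).
    return {"findings": state["findings"] + ["verified spec/pricing details"]}

def write_report(state: ResearchState) -> dict:
    # Consolidate the evidence into a SWOT-style report.
    return {"report": "\n".join(state["findings"])}

graph = StateGraph(ResearchState)
graph.add_node("deep_search", deep_search)
graph.add_node("deep_research", deep_research)
graph.add_node("write_report", write_report)
graph.add_edge(START, "deep_search")
graph.add_edge("deep_search", "deep_research")
graph.add_edge("deep_research", "write_report")
graph.add_edge("write_report", END)

app = graph.compile()
result = app.invoke({"query": "competitor X edge PC", "findings": [], "report": ""})
```

The appeal is that the whole flow is an explicit, inspectable graph: every node reads and writes a typed shared state, which made orchestration and debugging far more controllable for our use case than free-form agent delegation.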
Recommendation System Design & Development
The goal of this project was to improve the customer purchasing experience in support of the Sales and Product teams, ultimately driving better sales outcomes.
My main responsibilities included:
- Designing and developing the end-to-end pipeline—from data cleaning and model training to deploying the recommendation system in production.
- Designing and implementing the multi-agent architecture.
From an engineering standpoint, I integrated over 30 fragmented APIs, consolidated three major data sources with 4,000+ records, and helped build a recommendation system with a three-stage training workflow. I also contributed to the design of both a single recommendation agent and a hybrid-context recommendation agent, and improved the overall training efficiency of the recommendation model by more than 80%.
Recommendation systems are actually very close to our everyday lives. When you scroll through social platforms like Facebook, Instagram, or Pinterest, much of what you see is algorithmically recommended. The same goes for e-commerce platforms like Shopee or Amazon—recommendations often shape what you click next and what you end up buying.
However, applying the same concept to Advantech’s B2B market is very different. Industrial computing products don’t have short life cycles like FMCG products, upgrades are costly, and the “return” after upgrading is not always worth it. On top of that, many customers are highly loyal and tend to purchase the same items repeatedly. So whether they place orders through Advantech’s official website or an e-commerce channel, a common scenario is simply “buying the same model again”—similar to hitting “Buy Again” on an online marketplace.
Because of this, our recommendation goal wasn’t just to find relationships between products. More importantly, we wanted to extract likely customer purchasing paths from historical sales data: When do customers typically repurchase? Under what conditions would they consider upgrading? Which types of customers are more open to adopting new products? Beyond making product selection smoother and information clearer, we also aimed to identify the right moments in the purchasing journey to introduce newer models—gradually increasing customers’ willingness to upgrade.
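As a toy illustration of that first question, repurchase cadence can be estimated directly from order history. The column names and numbers below are made up, assuming a pandas-style workflow rather than our real schema:

```python
import pandas as pd

# Toy order history; columns and values are illustrative.
orders = pd.DataFrame({
    "customer_id": ["A", "A", "A", "B", "B"],
    "model":       ["UNO-2271", "UNO-2271", "UNO-2271", "ARK-1124", "ARK-1124"],
    "order_date":  pd.to_datetime(
        ["2023-01-10", "2023-07-02", "2024-01-15", "2023-03-01", "2024-02-20"]),
})

# Days between consecutive orders of the same model by the same customer.
orders = orders.sort_values("order_date")
orders["days_since_last"] = (
    orders.groupby(["customer_id", "model"])["order_date"].diff().dt.days
)

# Median repurchase interval per model: a first cue for when to
# proactively surface a "buy again" or upgrade suggestion.
cadence = orders.groupby("model")["days_since_last"].median()
print(cadence)
```

Signals like this feed the timing side of recommendations: when a customer approaches the typical interval for a model, that's a natural moment to introduce the newer generation.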
From a design perspective, I integrated sales data from three different sources, combined it with customer interaction signals from the website, and aligned requirements with multiple product and sales teams. Different product lines behave very differently: some products are highly specialized, and recommending unrelated items would only confuse customers—so these cases are better suited for “horizontal recommendations” (recommending similar or closely related products). In other cases, customers need vertically integrated solutions—buying a full set of components to complete an application scenario—so the recommendation logic must support an end-to-end, “one-stop” purchasing flow.
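A hypothetical sketch of routing between these two modes might look like the following; the product-line flags and helper stubs are illustrative, not our actual logic:

```python
# Lines sold as end-to-end solutions get bundle-completion logic;
# everything else defaults to similar-item recommendations.
VERTICAL_LINES = {"edge_ai_solutions"}

def similar_items(product_id: str) -> list[str]:
    # Stub: would query the trained model for closely related products.
    return [f"{product_id}-successor", f"{product_id}-wide-temp-variant"]

def complete_bundle(product_id: str) -> list[str]:
    # Stub: would look up the remaining pieces of a full solution.
    return [f"{product_id}-io-module", f"{product_id}-gateway", f"{product_id}-license"]

def recommend(product_line: str, product_id: str) -> list[str]:
    if product_line in VERTICAL_LINES:
        # Vertical: help the customer complete an application scenario.
        return complete_bundle(product_id)
    # Horizontal default: similar or closely related models, which is
    # safer for highly specialized product lines.
    return similar_items(product_id)

print(recommend("panel_pc", "PPC-3100"))
print(recommend("edge_ai_solutions", "EIS-100"))
```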
Technically, I took over a project originally built by a previous intern. I’m genuinely grateful for the clean codebase and clear structure, which made the handover relatively smooth. Even so, it still took me nearly seven working days to fully understand the entire workflow—from how each API was used, to how data was cleaned and aligned, to how the final recommendations were generated. I needed to run through and validate every step myself before I felt confident making changes.
For model training and inference, we deployed the full workflow on AWS Personalize. During implementation, we faced a range of challenges—such as inconsistent data formats, missing fields, mismatched IDs across sources, inference latency and cost control, result fluctuations caused by prompt/parameter tuning, the need to design new algorithms for specific recommendation cases, as well as batch processing and retry mechanisms. I addressed these issues by breaking the pipeline down step by step and systematically logging the inputs and outputs of each stage, which helped narrow down the problems and gradually stabilize the overall system.
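The debugging habit itself is easy to sketch. Below is a simplified stage wrapper, with hypothetical stage functions, that logs each stage's inputs and outputs and retries transient failures; this is roughly the pattern that helped me narrow down problems:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_stage(name, fn, payload, retries=3, backoff=2.0):
    """Run one pipeline stage, logging its input/output and retrying
    transient failures so one flaky call doesn't sink the whole run."""
    log.info("stage=%s input=%s", name, json.dumps(payload, default=str)[:500])
    for attempt in range(1, retries + 1):
        try:
            result = fn(payload)
            log.info("stage=%s output=%s", name, json.dumps(result, default=str)[:500])
            return result
        except Exception as exc:
            log.warning("stage=%s attempt=%d failed: %s", name, attempt, exc)
            if attempt == retries:
                raise
            time.sleep(backoff ** attempt)

# Usage: chain stages so every boundary is observable (stage
# functions here are hypothetical placeholders).
# cleaned = run_stage("clean", clean_records, raw_batch)
# recs    = run_stage("recommend", get_recommendations, cleaned)
```

Once every stage boundary is logged, "the output looks wrong" turns into "stage X received Y and returned Z," which is a much smaller problem to solve.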
(Figure: underlying model training architecture)
Integrating User Needs & Future Planning
As the project progressed, I realized that a recommendation system can’t be built solely from an engineering perspective of “looking at data and computing scores.” To make the system truly fit real-world frontline use cases, my mentor and I met with PMs, Sales, and Marketing teams across multiple departments to systematically gather their pain points, desired output formats, and actual usage workflows.
Building on the original recommendation agent, we further designed a hybrid recommendation agent that adapts to different scenarios and contexts. Instead of relying on fixed rules or a single recommendation path, this agent can adjust recommendations based on the customer’s current needs, product line characteristics, and stage in the purchasing journey—resulting in recommendations that better reflect real on-site decision-making.
This discussion process was extremely valuable to me. Beyond improving my ability to explain engineering problems in ways other departments could understand and translating technical constraints into actionable solutions, I also gained deeper insights from marketing leaders into the differences between B2B and B2C products—particularly in decision-making processes, purchasing cycles, and communication styles. In addition, I strengthened my understanding of the marketing / sales funnel, which helped me better align recommendation logic with the business metrics and workflows that stakeholders truly care about.
Three Things I Learned
1. From One-Off Training to a Pipeline That Can Evolve Long Term
Because a recommendation system needs to be retrained and redeployed regularly, building the training architecture taught me more than just “getting a model to train.” The real challenge was turning it into a workflow that can run reliably over the long term: integrating multiple data sources and APIs, using MLflow for versioning models and datasets, and designing stable exception-handling mechanisms (e.g., missing data, schema changes, API failures or latency) without disrupting the training cadence. This was a meaningful challenge for me—many details that seem minor at first are exactly what determine whether a model can be trusted and adopted by a team over time.
- Data sources & API integration: Unified product master data, customer attributes, and interaction/transaction logs under consistent keys and schema definitions, and standardized data ingestion (DB / API / files) so retraining doesn’t require re-aligning specs every time.
- Re-runnable data processing pipeline: Turned cleaning, deduplication, field normalization, category mapping, and missing-value handling into repeatable steps to ensure consistent data logic across retrains—and reduce “why is this run different from the last one?” issues.
- Feature generation & consistency: Converted raw data into stable, model-ready features (e.g., similar/alternative product relationships, common bundles, segment preferences), and ensured training and inference shared the exact same transformation logic.
- MLflow versioning (models + datasets): Logged model versions, parameters, metrics, and the corresponding dataset versions/splits in MLflow to make results traceable, comparable, and easy to roll back to a stable release when needed (a minimal sketch follows this list).
- Exception handling without breaking cadence: Built guardrails for common failure modes (e.g., missing fields, schema drift, API timeouts/failures), using retries, fallbacks, or skipping non-critical issues so retraining can complete reliably instead of failing due to a single upstream problem.
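Here's what the MLflow versioning habit might look like in its most minimal form; the experiment name, tags, params, metrics, and file path below are placeholders rather than our real configuration:

```python
import mlflow

mlflow.set_experiment("reco-retraining")  # placeholder experiment name

with mlflow.start_run(run_name="weekly-retrain"):
    # Tie the run to the exact dataset snapshot it trained on,
    # so any result can be traced back and reproduced.
    mlflow.set_tag("dataset_version", "2025-06-01_snapshot")
    mlflow.log_param("num_interactions", 4000)
    mlflow.log_param("training_stages", 3)

    # ... train the model here ...

    mlflow.log_metric("precision_at_10", 0.42)  # placeholder metric
    # Placeholder path; assumes a dataset manifest file exists.
    mlflow.log_artifact("data/train_manifest.json")
```

The payoff shows up weeks later: when a retrain underperforms, you can diff it against the last good run instead of guessing what changed.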
2. Marketing / Sales Funnel
You can think of a sales funnel as a “route map” that shows how a customer moves from first noticing you to eventually paying. It’s called a funnel because the number of people is large at the top (high exposure/traffic), but it gradually narrows at each filtering step—until only a portion of them convert.
In short, a sales funnel describes the stage-by-stage journey a potential customer goes through, from initial contact to conversion.
A common five-stage version (often explained through an AIDA-style framework) looks like this:
- Awareness (Exposure / Recognition): They see you and know who you are.
- Interest: They’re willing to learn more—clicking into your website or content.
- Consideration (Evaluation): They start comparing options and want to understand solutions, use cases, and pricing range.
- Decision: They book a demo, request a quote, propose internally, and negotiate terms.
- Purchase & Loyalty (Conversion / Retention): They purchase, renew, repurchase, and potentially refer others.
With a funnel, a company can answer questions like:
- Where is the bottleneck?
For example, if exposure is high but conversion is low, the issue might be unclear messaging or a weak CTA (Call To Action). A quick way to spot the bottleneck from stage counts is sketched after this list.
- Which stage should we invest in first for the highest impact?
It’s not always about driving more traffic. Sometimes strengthening the mid-funnel—especially “evaluation/comparison” content—can significantly lift conversion.
- How can cross-functional teams align on the same language?
Marketing, Sales, and Product teams often speak in different terms. A funnel helps bring everyone back to the same shared map, so they can interpret data consistently and decide next steps together.
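To make the bottleneck question concrete, here's a tiny sketch with made-up stage counts; the weakest stage-to-stage conversion rate is where to dig first:

```python
# Toy stage counts for one quarter; the numbers are invented.
funnel = {
    "Awareness": 10_000,
    "Interest": 2_500,
    "Consideration": 800,
    "Decision": 200,
    "Purchase": 90,
}

stages = list(funnel.items())
for (prev, prev_n), (cur, cur_n) in zip(stages, stages[1:]):
    print(f"{prev} -> {cur}: {cur_n / prev_n:.0%}")

# With these numbers, the weakest hops are Awareness -> Interest
# and Consideration -> Decision (both 25%), so those stages are
# where extra messaging or evaluation content would pay off most.
```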
3. Technology Choices & Team Collaboration
When it comes to technology selection, I’ve found that a more effective approach is to first build a baseline understanding of the tools and frameworks that are currently popular, and then discuss them with senior colleagues or teammates based on clear requirements and assumptions. After all, work isn’t a study group—people don’t always have the bandwidth to explore a brand-new tool together from scratch. Framing discussions this way keeps the focus clear and avoids chasing trends for their own sake. To me, tech selection isn’t about choosing the “most powerful” option; it’s about whether a tool can support our workflow and be maintained by the team. For example, MCP, which gained traction around mid-2025, was adopted by many teams (including ours). During the learning process, I also kept notes to document my understanding (such as Note: MCP Ep.1 and Note: MCP Ep.2).
Collaboration is also a constant part of daily work—whether it’s aligning in meetings, discussing solutions, or requesting support across teams. During my internship, I developed several habits that proved especially useful:
- Prepare before scheduling meetings: Everyone’s time is valuable. When I need to set up a meeting, I first check availability via tools like Teams, Slack, or Google Calendar, then propose two or three possible time slots. I also send a short message outlining the meeting goal, the questions we want to resolve, and my current thoughts. If there’s relevant material, I attach it in advance so participants can review it beforehand and jump straight to the key points.
- Ask for help with options, not just problems: When I need support or input, I avoid simply saying “I’m stuck.” Instead, I clearly explain the context and include one or two possible approaches or judgments I’ve already considered. This makes it easier for others to give direction or quickly confirm which path is more feasible.
- Use the “sandwich” communication style—without overdoing it: I often use a light sandwich structure: briefly share context and appreciation, clearly state the request and constraints, and end with proposed next steps and how I can support. The goal isn’t politeness for its own sake, but making it immediately clear what I’m asking and how the other person can respond.
- Turn discussions into trackable conclusions: After meetings or solution discussions, I summarize the outcomes in a short written note—what was decided, who owns what, and what the next checkpoint is. This can also be handled by AI meeting note tools. It’s especially helpful for cross-functional collaboration, as it prevents mismatched memories and reduces repeated communication.
- Be clear about response timing: Work isn’t a guessing game. For work-related messages or emails, if I can’t provide a full response right away, I still try to reply with a quick “received” and an estimated time for a more complete update. Even if the task can’t be addressed immediately, acknowledging receipt significantly reduces misunderstandings and follow-up pings.
Culture
Although I was just an intern, I would like to share what I observed at Advantech from an intern's perspective. Overall, the environment was more open and welcoming than I expected. In the Data Intelligence Team, in particular, interns and full-time employees interacted frequently and collaborated closely.
For me, this was an ideal internship experience: interns were encouraged to proactively join cross-functional meetings, discuss ideas with mentors, and propose project architectures and directions. More importantly, the projects actually made it into production—rather than remaining superficial or purely experimental.
Midterm and Final Internship Presentations
- Advantech takes its internship program very seriously. Senior leaders attend interns’ presentations across different topics, and there are high expectations for both topic diversity and real business impact. In the end, I was honored to receive the Best Practice Award 🎉
Career Sharing Sessions
In addition to the regular career talks across different functions (e.g., UI/UX, R&D) and the sharing session at the final graduation ceremony, I’m also grateful to the HR lead for inviting managers and senior colleagues from our team to take time out of their schedules to share their career journeys and lessons learned—it was genuinely helpful.
Points-to-Food Program
Each month, employees receive roughly NT$900 worth of points, which can be used to redeem meals at the in-house café or some pretty solid dining vouchers. Since you need to stay for a certain period before you can use them, I used mine for breakfast (the bagels at Knock Knock Coffee are really good), coffee, and I also redeemed two weekend dinner vouchers for Xuhui / Xiangxiang.
During my internship, we could also work remotely one day per week, and there was a daily flexible break from 3:30–4:00 PM—you could keep working or take a short reset and do something else to recharge.
Reflection
My original motivation for pursuing an internship was to experience what it’s like to work in a company as an undergraduate. I even felt anxious at one point about not being able to land an internship. After joining, I did get to see what working in a large corporation is really like—its culture, collaboration habits, and day-to-day workflows.
Overall, while an internship can help you earn some money, I genuinely believe it’s okay whether you have one or not. What matters most is knowing how to keep improving your skills and growing your capabilities.