New York City Metropolitan Area
Robert can introduce you to 10+ people at BlackRock
1K followers
500+ connections
About
Experience & Education
-
BlackRock
********* ********* * *********** * ****** *********
-
***
****** ***** ** ********* * *********
-
***********
********* **** ********** *** *********
-
********** ** **********
** *********
-
***** **** **********
*** *******
Other similar profiles
-
Jerry Binkley
Business Reporting Analysis Leaders in Seattle WA
30K followers · East Wenatchee, WA
-
Sarah Fossen
Minnesota Citizens for the Arts
9K followers · Greater Minneapolis-St. Paul Area
Explore more posts
-
Buddy Wiseman-Barker
Hudson Labs • 2K followers
Not all guidance is created equal. Some is explicit. Some is buried in management commentary. Companies quietly adjust revenue ranges, margin expectations, demand commentary, or segment outlooks - sometimes just a sentence or two that changes the entire forward story. Hudson Labs has a proprietary model designed to detect ALL forward-looking statements made in a call:
- Guidance revisions (raised, lowered, or reframed)
- Subtle outlook changes in management commentary
- The catalysts likely driving those revisions
- Similar shifts happening across peers and industries

Instead of manually digging through transcripts and filings, you can immediately see what changed, why it changed, and where it might show up next. Because the market doesn’t move on the past quarter.

Below is an example on ORCL (Oracle), which reported last night. https://lnkd.in/eD_4EdFh
11
1 Comment -
Nurtekin Savas
PayPal • 9K followers
Great representation of Boston College - Woods College of Advancing Studies at the Open Data Science Conference (ODSC). The top topic covered at the conference was, of course, agentic AI. Some interesting takeaways:
- Agentic process automation will replace RPA
- Get comfortable with probabilistic outcomes and develop guardrails accordingly
- Current guardrails are at the model level; we need agentic use-case guardrails at the technical, business, and regulatory levels
- A better agent design has clear objectives, is specialized vs. generalized, has guardrails, focuses on task design, and is iteratively refined
- Common challenges of agentic AI include hallucinations (LLM as judge), coordination (A2A, ACP, MCP), cost, reliability, guardrails, and security
- Entitlements for AI agents are very important, and the framework needs to be developed - i.e., which agent has the right to ask which questions
- Knowledge graphs will be the glue between agentic AI and tabular data / databases
- There are already thousands of MCP servers you can start using
29
2 Comments -
Erin Davison Medeiros
Vision Insights • 630 followers
We're facing more and more questions about the use of synthetic data in market research. I'd recommend this blog post, which really resonated with me. So much of the conversation is "how closely can this replicate human data?" But there are so many other considerations researchers should be thinking about. https://lnkd.in/eSQfSTPj
31
3 Comments -
Bilal Mahbub
Bank OZK • 1K followers
Moody’s Analytics reports that 21 states and Washington, D.C. are either in recession or at high risk of entering one, highlighting growing economic fragility across the U.S. States like California, New York, and Illinois are already seeing contraction, while others including Texas and Florida are at elevated risk. Analysts point to slower job growth, weakening consumer spending, and cooling labor markets as key warning signs.
8
1 Comment -
Jackie Guthart
Radius • 998 followers
I tried Anthropic’s new Claude Interviewer demo released on 12/4/2025. It’s a quick adaptive AI-led interview that follows up on your answers in real time. Not available for fielding your own research yet, but it’s a clear look at the future of data collection.

What caught my attention is the gap between how useful people say AI is vs. how they feel about it. People love the time savings. They don’t love the stigma, the trust issues, or the uncertainty about what this means for their future. Scientists in particular questioned whether AI is actually a net time saver once you verify everything. This aligns with what we’re seeing across the industry. The value is there. The comfort level isn’t always.

I also pulled the key findings into a quick infographic below, created in NotebookLM. My demo run wasn’t perfect (Claude froze when I asked it for example answers), but even with bugs, the direction is obvious. AI is getting close to acting like a qualitative researcher: asking follow-ups, probing, adjusting, and synthesizing themes at scale.

Would you rather give feedback to an adaptive AI interviewer or a static programmed survey?

Demo: https://lnkd.in/ezbSAdXE
Study Results: https://lnkd.in/egB7qerT
Dataset: https://lnkd.in/eayhQdgv

#AI #Anthropic #UserResearch #MarketResearch #Insights
27
-
Peter Hafez
Bigdata.com • 8K followers
I'm excited to share new research from RavenPack's Data Science team examining how sentiment during earnings call Q&A sessions drives subsequent equity returns. The most valuable signals often emerge from the dialogue itself: the nuance in analyst questions and management's tone in response. This study quantifies that intuition.

Key findings:
• Negatively-perceived analyst questions: 410 bps annualized spread
• Negatively-perceived management answers: 370 bps annualized spread
• Information ratios of 0.78 and 0.75 demonstrate signal strength

The analysis leverages RavenPack's Transcripts Annotations, which enable sentence-level sentiment isolation within Q&A sections - precision that's critical for this type of research. For those working at the intersection of language and markets, the full paper is available here: https://lnkd.in/dAWn8xB6

#Altdata #EarningsCalls #Alpha #NLP #AIinFinance #Finance #Investing #Quant
39
-
Rabah Iberraken
BNP Paribas CIB • 1K followers
Statistical Value ≠ Practical Value

In my last post, a t-test helped debunk a new teaching method—what looked like a win wasn’t statistically meaningful. This time, we flip the story. A z-test says an A/B test result is a huge win. The p-value is microscopic. But does it actually matter? Let’s break it down.

We’re testing whether a new webpage improves conversion:
• Historical conversion: 9.5%
• New version: 10%
• Sample size: 1,000
• Known population standard deviation: 2%

What’s a z-test? A z-test checks if a sample mean is significantly different from a known population mean, assuming a known standard deviation. In A/B testing, you might assume a known SD based on prior experiments or big datasets. That lets you use a z-test to evaluate the difference.

What does 2% SD mean? It means conversion rates typically vary by ~2 percentage points from noise—based on historical data, not this sample. If your baseline is 9.5%, your observed rates might fall between 7.5% and 11.5%. That 2% captures natural fluctuation, not user-level variance.

Z-test formula: Z = (x̄ - μ₀) / (σ / √n), where:
• x̄ = sample mean (10%)
• μ₀ = null hypothesis mean (9.5%)
• σ = population SD (2%)
• n = 1,000

Standard error: SE = 0.02 / √1000 ≈ 0.000632
Z = (0.10 - 0.095) / 0.000632 ≈ 7.91

That’s far beyond the 1.645 threshold (for 95%, one-tailed). Conclusion: the z-test says the difference isn’t due to chance. It’s “statistically significant.” So business might say: “The new version works!” or “We’ve got a winner!”

But let’s pause:
• 10% of 1,000 = 100 users
• 9.5% = 95 users
• Net gain = 5 users

Statistically significant ≠ practically significant. The irony? The new result (10%) is only 0.25 SDs above the historical mean: (0.10 - 0.095) / 0.02 = 0.25. But with a large sample size, the standard error shrinks—making even tiny effects look massive: Z = 0.005 / 0.000632 ≈ 7.91. The punchline: the observed rate is well within expected variation, but the test sees it as highly significant. More data ≠ more meaning.

Z-test assumptions:
• You know the population SD
• The sampling distribution is normal (fine if n is large)
• Observations are independent

Bottom line: p-values don’t pay the bills. A “significant” result might not mean anything in practice. Ask yourself:
• What’s the effect size?
• Is it worth implementing?
• Does it matter at scale?

Ever seen a statistically significant result that changed nothing?
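The arithmetic in the post is easy to verify in a few lines. A minimal sketch in plain Python (stdlib only; the numbers are the post's hypothetical A/B figures, not real data):

```python
import math

def z_statistic(sample_mean, null_mean, pop_sd, n):
    """One-sample z statistic: (x_bar - mu_0) / (sigma / sqrt(n))."""
    se = pop_sd / math.sqrt(n)  # standard error shrinks as n grows
    return (sample_mean - null_mean) / se

# Hypothetical figures from the post
z = z_statistic(0.10, 0.095, 0.02, 1000)
effect_sds = (0.10 - 0.095) / 0.02  # raw effect size, ignoring sample size

print(f"z = {z:.2f}")               # z = 7.91, far past the 1.645 one-tailed cutoff
print(f"effect = {effect_sds:.2f} SD")  # effect = 0.25 SD, tiny in practical terms
```

Note how the same 0.5-point lift yields z ≈ 7.91 only because n = 1,000 shrinks the standard error; the raw effect is still a quarter of one historical SD.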
1
-
Ikhwan Muhammad
CAMP Investment Technologies • 2K followers
The more I use Claude Code, the more it reminds me of the feeling I had when I first used Tableau 10 years ago. Both lowered the barrier to something that was previously gated behind technical expertise. Tableau allowed analysts without a CS background to build professional dashboards that once required a developer. Claude Code allows people to ship things that once required a programmer.

But here's what I've also observed: just as with BI tools, the foundation still matters. The analyst who worked through Excel beyond just formulas - actually using pivot tables, Power Pivot, VBA, connecting spreadsheets to databases like MS Access - picked up Tableau and Power BI faster than most, and did significantly more with them. The tools amplified what they already understood; they didn't replace it.

The same dynamic is playing out with AI. Systems thinking, architectural intuition, knowing how components fit together - these still translate into a tangible edge when vibe coding. AI doesn't eliminate the advantage of strong fundamentals; it just shifts where that advantage surfaces. What changed is not whether expertise matters, but where it lives.

BI platforms removed the need to write JavaScript or PHP for dashboards. But in their place came new requirements: data security and governance, row-level security, data modelling best practices (star, snowflake, or constellation schema), and knowing how to build dashboards that are insightful without being resource-heavy. AI tools are doing the same for software creation. Writing code line by line becomes less of a bottleneck. But software architecture, security principles, and designing systems that scale - these become more critical, not less. The skill floor drops; the skill ceiling rises.

There's also a natural progression in both domains. In BI: Looker Studio → Tableau/Power BI → Looker/Holistics (as they require a code-based data model). In AI: ChatGPT → Perplexity → Claude → Claude Code → Claude API/Openclaw. The ceiling keeps rising with each step.

BI platforms enabled data democratization. AI tools are enabling creation democratization.
49
2 Comments -
Mike Pacitto
iM Global Partner • 5K followers
Something big is happening in managed futures, and it’s reshaping the conversation for allocators. The high level of performance variance we’ve seen between managers over such a short time period this year is truly historic. And as Andrew Beer explains in this month’s update, this isn’t just a one-off anomaly – it’s a sign of a long-term structural challenge in the industry:

☑️ The “complexity arms race” in CTA strategies has driven up implicit trading costs. Those costs don’t appear in a prospectus, but they do appear in returns.
☑️ The replication approach for DBMF is deliberately efficient, focusing on what we believe are the most liquid futures markets to potentially save on implementation costs.
☑️ That efficiency has been a key driver of DBMF’s structural alpha, and why we believe replication has consistently outperformed traditional CTA hedge funds and mutual funds.

🎯 If you’re an analyst looking at managed futures, the question is: which approach is built to deliver the best combination of alpha generation and true diversification benefits most efficiently over time?

Over the long term, DBMF continues to deliver: +6.49% annualized return since inception (5/7/19), 300+ bps ahead of the SocGen CTA Index, 425+ bps ahead of the Morningstar peer average, 530+ bps ahead of the Bloomberg AGG.

Full July update with Andrew here 🎥 https://lnkd.in/ggCk-jGK

#AssetManagement #ManagedFutures #Replication #CTA #ETF #ActiveManagement #HedgeFunds #MarketingCommunication

For standardized performance, click here: https://lnkd.in/gvZCAA7u

Performance data quoted represents past performance. Past performance does not guarantee future results. The investment return and principal value of an investment will fluctuate so that an investor’s shares, when redeemed, may be worth more or less than their original cost. Current performance of the fund may be lower or higher than the performance quoted. Performance data current to the most recent month end may be obtained by calling 888-898-1041.

The Fund’s investment objectives, risks, charges, and expenses must be considered carefully before investing. The statutory and summary prospectuses contain this and other important information about the investment company, and it may be obtained by calling 800-960-0188 or visiting www.imgpfunds.com. Read it carefully before investing.

A commission may apply when buying or selling an ETF. The iMGP DBi Managed Futures Strategy ETF is distributed by ALPS Distributors, Inc.
31
1 Comment -
Berj Kazanjian
Storytell.ai • 2K followers
McKinsey now has 25,000 AI agents, which is more than the population of many towns. Most of these agents were added in less than two years. AI is not just on its way to consulting; it is already here, changing how work is done and who does it. This matters because it shows what the next era of professional services will look like: faster, more streamlined, and focused on people working with agents instead of against them.

Here are 5 key points that stood out:
1. McKinsey now has 60,000 workers in total, with 25,000 of them being AI agents.
2. The company plans for every employee to be paired with at least one agent within the next 18 months.
3. Work driven by AI already accounts for 40% of McKinsey’s business, thanks to QuantumBlack.
4. The company now looks for people who can think both like consultants and engineers.
5. The business model is shifting from giving advice and making slide decks to focusing on shared results and AI-driven change.

This is a massive shift. It turns consulting into something closer to: a transformation partner, a systems integrator, a co‑owner of outcomes, and a builder/operator of AI workflows! It also means AI agents aren’t just internal productivity tools; they’re part of the product McKinsey sells!!

Here's my take: This is not about replacing people. It is about removing the slow, manual parts of the job that used to hold teams back. The firms that succeed will treat AI agents as real teammates and help their people work at a higher level. Talent will be measured less by background and more by adaptability, curiosity, and the ability to work well with intelligent systems. This is the future happening right now. My advice, as always, is to learn to use AI now; don't wait until tomorrow, it might be too late then!!!
#AITransformation #FutureOfWork #ConsultingIndustry #AIAgents #McKinsey #QuantumBlack #DigitalConsulting #WorkforceInnovation #AIInBusiness #ProfessionalServices #TechStrategy #BusinessTransformation #AIProductivity #HumanAndAI #ConsultingCareers #AutomationTrends #AIAdoption #EnterpriseAI #LeadershipInsights #FutureSkills #LearnAI #LearnToUseAI #AITransformation #AIAdaption #AIFirst #BCG #AIEnabled https://lnkd.in/eeUapHgB
29
4 Comments -
Andrew Davidson
Mintel • 9K followers
Introducing the ACC. Last week I asked whether American Express would surpass Capital One’s record marketing spend in Q4 2025. This week we got the answer.

Capital One led the quarter at $1.9 billion. Amex came in at $1.6 billion. Chase reported $1.5 billion. These three now operate in a different league, with quarterly marketing investment that far exceeds the rest of the industry. Together they form a new category in credit card marketing: ACC. Amex, Capital One, Chase.

Amex also delivered a record $6.25 billion in full-year marketing spend and signaled low single-digit growth that will put them at around $6.5 billion in 2026. Capital One set a new quarterly high as it integrates Discover. Chase continues to invest, setting its own annual record. The three issuers combined represent a massive $18 billion in annual marketing spend. The ACC has emerged as the dominant tier of marketing spend in the credit card industry.

💡 The ACC is creating a new competitive reality. Amex, Capital One, and Chase now spend at a scale unmatched by the rest of the market, with multi-billion-dollar quarters and a combined $18 billion in annual marketing investment. Their size, momentum, and commitment to customer acquisition put real distance between themselves and the rest of the field.

💡💡 Smaller issuers must compete differently. Matching ACC spending is not realistic, so the path forward is targeted, scrappy execution. That means precision marketing, clear value propositions, faster test-and-learn cycles, and leaning into pockets of opportunity where scale alone does not determine the winner.

#ACC #marketing #advertising #competition
42
5 Comments -
Leon Barsoumian
Equitable • 2K followers
Marketing measurement is evolving faster than most teams can keep up, and this week’s Association of National Advertisers Measure Up Boston conference made that clear. A few themes that stood out to me:
• Marketing only drives value when it ties all the way to financial outcomes.
• Experimentation isn’t a pilot anymore; it is the operating model.
• CLV, identity, attribution, and testing are becoming the real competitive levers.
• Most ads don’t move the needle, so focusing on incrementality matters more than ever.

My takeaway: It’s not about doing more analytics; it’s about focusing on the few questions that truly move the business.

And as a bonus, it was great catching up with former colleagues George Sargent and Bre Rossetti. Thanks for hosting in your new space! What shifts are you seeing in how organizations measure impact today?
26
1 Comment -
Luke Tilley
M&T Bank • 3K followers
The Fed stopped reducing its balance sheet (a process sometimes called "quantitative tightening" or "QT") on December 1, and the recent behavior of short-term interest rates shows the reason. Some overnight rates started drifting above the top of the Fed's target range around mid-October. The Fed's official interest rate, the federal funds rate (green), has remained within the range, but is subtly drifting up along with the others. This was a clear signal to the Fed that QT needed to end. #Fed #QT #interestrates #markets
20
2 Comments -
Singri Goutham
HSBC • 842 followers
We are leaving retrieval performance on the table by treating documents as flat sequences of text. When you split a #financial report or a technical manual every 500 words, you inevitably cut through the middle of an idea. You might separate the problem from the solution just because the word count reached a limit. This leads to retrieval failures where the model misses the context entirely.

A recent approach to Hierarchical Text Segmentation proposes a smarter, bottom-up #architecture that solves this without inflating costs. Instead of blind cutting, the model functions like a human reader. First, it identifies natural semantic breaks where the topic actually changes (#segmentation). Second, it groups these related segments into broader themes (#clustering).

The real value for business applications is in the "Dual-Vector" retrieval strategy. The system indexes both the specific segment and the broader cluster. If a user asks a specific question like "Who signed the audit?", the system matches the specific segment #vector. If they ask "What is the overall risk profile?", it matches the broad cluster vector. You get the best of both worlds: granular precision and high-level context.

#RAG #chunking #GenAI #AIEngineer #Banking #ML
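The dual-vector idea described above is easy to sketch. Below is a toy illustration, not the approach's actual implementation: the two-dimensional "embeddings", the segment texts, and the plain cosine similarity are all hypothetical stand-ins for a real embedding model. Each segment is indexed under both its own vector and its cluster's vector, and a query is scored against whichever matches better.

```python
import math
from dataclasses import dataclass

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

@dataclass
class Segment:
    text: str
    seg_vec: list      # embedding of the segment itself (granular)
    cluster_vec: list  # embedding of the broader theme it belongs to

def retrieve(query_vec, segments):
    # Dual-vector scoring: a narrow query tends to match seg_vec,
    # a broad query tends to match cluster_vec; take the better of the two.
    return max(segments, key=lambda s: max(cosine(query_vec, s.seg_vec),
                                           cosine(query_vec, s.cluster_vec)))

# Toy index: two segments sharing one "audit report" cluster vector
audit = Segment("The audit was signed by the lead partner.", [1.0, 0.0], [0.6, 0.8])
risk = Segment("Liquidity risk remains elevated.", [0.0, 1.0], [0.6, 0.8])

# A query close to the audit segment's own vector retrieves that segment
print(retrieve([1.0, 0.1], [audit, risk]).text)
```

The design choice worth noting: because each segment carries both vectors, no separate "summary index" lookup is needed; one pass over the index serves both granular and thematic queries.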
5
-
Kevin Gordon
Charles Schwab • 7K followers
Latest from Liz Ann Sonders and me is now live: It's an economy that looks less like a cliff and more like a long, slightly uphill treadmill—hard work, limited scenery, and the occasional splash of energy drink from better services data. When official releases return—or better said, when official releases have "clean" data once again—it will not surprise us if they broadly confirm this "slowish expansion with softer labor demand and sticky services prices" script. Until then, we'll keep following the private breadcrumbs—and yes, we brought extra water. https://lnkd.in/eR-zeaMp
29
-
Alan Milligan
Black Oak Data Advisory • 8K followers
Why Boards are refocusing on the data operating model in 2026 ...

The thing is, Boards are no longer debating platforms. They are debating why outcomes still aren’t landing. In my opinion, these are the questions Boards will be asking in 2026 ...

🔸 Accountability: “Who actually owns data and AI outcomes?” Ownership is fragmented across tech, risk and business. Committees proliferate, but no one is personally accountable. Boards will push toward a single senior owner with clear decision rights across value, risk and prioritisation.

🔸 Value visibility: “Why can’t we see value quarter by quarter?” Use cases are approved, but value is not embedded in the management cadence. Boards want portfolio-level value governance wired into quarterly and monthly rhythms, not retrospective reporting.

🔸 Scale: “Why so many pilots, but scaling fails?” Data is treated as a rollout problem rather than an operating-model redesign. Boards are forcing redesign of the workflows data touches, not just deployment of models.

🔸 Defensibility: “Is this safe, controlled and explainable?” Governance and controls are often bolted on after deployment. Boards expect human-in-the-loop oversight and data controls to be designed into day-one operations.

🔸 Cost discipline: “Why are costs still rising?” Legacy platforms and duplicated data work are rarely stopped. Boards are or will be demanding explicit decommissioning authority and simplification as part of the operating model.

In 2026, boards will back leaders who can explain the business operating model, not the tech architecture.
14
7 Comments
Others named Robert Edelman
-
robert edelman
Solana Beach, CA
-
Robert Edelman
Bronx, NY
-
Robert Edelman
Baltimore, MD
-
Robert Edelman
New York City Metropolitan Area
55 others named Robert Edelman are on LinkedIn
See others named Robert Edelman