For Hackers who like answering questions from other Hackers.
A lot of questions on Hacker News go unanswered. There's no easy way to find unanswered questions on the HN site, so I built AnswerHN to help people discover them.
If you enjoy this project, or find it useful, consider buying me a coffee.
Posted at 2025-01-30 15:44 by karlbaumg | 0 comments
Here is my quick elevator pitch: the Android Emulator requires virtualization and GPU support, and consumes quite a bit of CPU and memory to run well. While that's acceptable on developer laptops, it's hard to provide in restricted environments such as GitHub Actions, inside another VM, or on generic servers without a GPU.
Limbar runs the emulator for you in the cloud, lets you control it remotely, and provides an adb tunnel so that every tool in the Android ecosystem can use it as a device. It's a true cloud: unlike competitors, we don't charge for concurrency, so you can scale as much as you need without worrying about unused capacity.
I guess I'd need a skyscraper elevator for this pitch, but bear with me please, thanks!
Here is the website https://limbar.io
(Technically, we're running our own distribution of Android without relying on QEMU, hence very low usage when idle, and dynamic CPU & memory allocation lets us provide concurrency at a low cost.)
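For readers curious what "use it as a device" looks like in practice, here is a minimal sketch of consuming such an adb tunnel; the host:port endpoint below is a made-up placeholder, since the post doesn't show Limbar's actual tunnel addresses:

import subprocess

# Point the local adb server at the remote emulator's tunneled endpoint.
# (Hypothetical address; substitute whatever endpoint the service hands you.)
subprocess.run(["adb", "connect", "emulator-1234.limbar.example:5555"], check=True)

# From here, standard Android tooling treats it like any locally attached device.
subprocess.run(["adb", "devices"], check=True)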
I recently started independent contracting work as a developer. What tools do you use to organize your work? Specifically, creating and signing contracts, managing invoices and payment, etc.
There are tools like Zoho One that do “everything”, and more individual tools like Wave and Docusign. What’s worth getting?
Posted at 2025-01-30 14:38 by soneca | 0 comments
I think with the reasoning models, LLMs can be more useful for coding than just smarter autocomplete and search. I have not tried any yet, though.
What is the best process or tool to use AI to help me code in a specific somewhat large codebase?
Probably better to be model-agnostic. The more integrated into my coding setup, the better (I use VSCode, mostly TypeScript, and work on basic SaaS stuff, nothing advanced, if it matters).
Posted at 2025-01-30 10:59 by asim | 0 comments
Hey all
The tl;dr: I spent 10 years working on an open source project called Micro. In that time I purchased the domain micro.dev. Rather than letting it expire, I thought I'd see if there's anyone here who wants it, or has some relevant use for it.
I am curious to know what startup programs you're benefiting from.
I’ve personally claimed credits and perks from Azure, OpenAI, AWS Activate, Supabase and Notion.
Recently, I came across the Notion for Startups program, which offers a $100 Visa gift card and three months of the Notion Plus Plan.
Are there any other good startup programs offering useful benefits? Would love to hear what has worked for you!
Posted at 2025-01-30 03:56 by sujayk_33 | 0 comments
Just wondering.
Posted at 2025-01-30 00:35 by arthurcolle | 0 comments
http://www.pennelynn.com/Documents/CUJ/HTML/94HTML/19940045.HTM
I was reviewing the byte pair encoding literature and was curious.
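The link appears to be Philip Gage's 1994 C Users Journal article on byte pair encoding. For anyone else landing here, this is a toy rendering of the core idea (my own sketch, not Gage's C code): repeatedly replace the most frequent adjacent pair with a fresh symbol.

from collections import Counter

def bpe_compress(seq, n_merges):
    """Greedily merge the most frequent adjacent pair, n_merges times."""
    seq = list(seq)
    merges = []
    next_symbol = 256  # assumes byte input; new symbols sit above the byte range
    for _ in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # nothing worth merging
        merges.append(((a, b), next_symbol))
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(next_symbol)  # replace the pair with the new symbol
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_symbol += 1
    return seq, merges

print(bpe_compress(b"abababcabab", 2))  # ([257, 256, 99, 257], [((97, 98), 256), ((256, 256), 257)])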
I'd like some of my online tasks to be automated. What tools are available today to build a custom agent for a website?
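One common starting point (my assumption of what "custom agent" means here, not something the poster specified) is scripting a real browser with Playwright; LLM-driven agents typically sit on top of exactly this kind of automation layer. A minimal sketch with placeholder site and selectors:

# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/login")   # placeholder site
    page.fill("#username", "me")             # placeholder selectors and credentials
    page.fill("#password", "secret")
    page.click("button[type=submit]")
    print(page.title())                      # confirm where we landed
    browser.close()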
Posted at 2025-01-29 22:40 by keyurishah | 0 comments
Hi everyone,
As the founder of an early-stage startup, I’m on a mission to simplify data movement and data quality by building a product that truly meets industry needs. But to create the right product-market fit, I need your help!
If you’re working with data pipelines, ETL/ELT, data quality, or lakehouse technologies, I’d love to hear about the challenges and pain points you face with existing tools.
I’m conducting 45-minute product fit interviews to understand what’s working, what’s not, and what features would make your life easier. If you’re open to sharing your experiences (or know someone who would be), let’s connect!
Your insights will directly influence how we build a flexible, scalable, and user-friendly solution for the modern data stack.
Drop a comment below if you’re interested. Looking forward to the conversation!
Using just Scratch.
Posted at 2025-01-29 22:33 by ZevsVultAveHera | 0 comments
Why don't we see them used more widely outside of expansion cards like graphics cards? Are they much more expensive? Are they slower, or do they have higher latency? Do they consume much more electricity? Or are they simply not the answer to even part of the problems created by multi-CPU computing, so no one uses them as "main" RAM? Or are there other, subtler reasons? Do they create new problems for operating system design when used as main computer memory?
Posted at 2025-01-29 22:32 by resters | 0 comments
I have read many of the discussions on this site, but none of them apply directly to my experience. My overall problem is that certain devices which usually operate reliably sometimes completely refuse to pair. In all cases, forgetting the device, rebooting the devices in question, or shutting down just doesn't work.
1. I have hearing aids with a BT transmitter for listening to TV. Or rather, I have two transmitters, but only one is flaky. From time to time the second device refuses to pair while the other transmitter is powered down: I reconnect, I reboot, I forget the devices, and finally I give up, haha. And the next day, unfailingly, everything works as it should.
2. The same thing happens with my heart rate pickup from a chest belt.
3. Very occasionally I can't reconnect my cell phone to my car. I try all the usual steps mentioned above, and the connection finally happens some hours later.
All of these circumstances involve equipment used daily, in otherwise unchanging circumstances.
Posted at 2025-01-29 21:01 by npollock | 0 comments
Is it just obtaining a distribution over next-token predictions, or is it more complex?
I came across this LinkedIn post[0] from a Sr. Director, Distinguished Engineer, CTO Office @ Google.
The post itself was discussed on HN already: https://news.ycombinator.com/item?id=42852866
> this is my "aha" moment in AI. An AI model that improves other models and itself! with significant proven gains for inference,. Deepseek analyzed the LlaMA code and realized that the KV cache was not optimal so the reasoning engine suggested a way to optimize it. Technically it was probably inspired by Mistral OSS implementation where KV cache uses dynamical allocation. this technique is well know but LlaMA missed it. So, deepseek suggested a code rewrite which was 99% correct with humans only needed to validate and test the code. This improvement results in 2.8x gain for inference on OSS LlaMA. What is even more impressive is that the same change can be adopted to improve DeepSeek itself with initial confirmed gains more that 3x, Effectively a model that is self improving because able to apply previous knowledge when appropriate, augmenting humans and connecting the dots. that's quite something to me. No hype, just pure technological paradigm shift.
Pretty much every sentence has a factual error. I gave DeepSeek the context and the post itself, and asked it to fact-check:
> The LinkedIn post contains several inaccuracies and exaggerations when compared to the actual news and code provided:
1. KV Cache Optimization Claim: The post presents KV cache optimization as the main improvement, but the provided code and news specifically discuss SIMD optimizations for quantization functions (q4_K, q6_K, etc.) in llama.cpp. There's no mention of KV cache changes in the technical details; this appears to be a conflated claim. (A toy illustration of what a KV cache actually is follows after the verdict below.)
2. Performance Gains: While the PR shows 2x speed improvements for WASM through SIMD optimizations, the post inflates this to:
- "2.8x gain for inference on OSS LLaMA"
- "3x gains for DeepSeek itself"
These specific numbers aren't substantiated in the provided materials.
3. Self-Improvement Claim: The post states the model can improve itself, but the evidence shows:
- Humans wrote detailed prompts (multiple iterations needed)
- Humans validated and tested all code
- The model only converted existing ARM NEON code to WASM SIMD
4. Mistral Inspiration: The post references Mistral's KV cache implementation, but the actual work:
- Uses Mistral as a reference for model registration patterns
- Focuses on quantization optimizations rather than attention mechanisms
5. Code Generation Accuracy: While the post claims "99% correct code", the technical details show:
- Multiple prompt iterations needed (4-8 retries for complex cases)
- Human debugging required for the q6_K implementation
- Final code still required human validation
Key Verdict: The LinkedIn post significantly exaggerates and misrepresents the actual technical work. The real achievement was using DeepSeek-R1 to help convert ARM NEON code to WASM SIMD through iterative prompting, resulting in 2x speed improvements for specific quantization functions, not a fundamental architectural improvement to LLaMA or true self-improvement capability. The post mixes different technical concepts (KV cache optimization vs. SIMD quantization) and inflates the AI's role in the process. While impressive, the actual work is more narrow and human-guided than portrayed.
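For readers unfamiliar with the vocabulary above, here is a toy, framework-agnostic sketch of what a KV cache is; the names and shapes are illustrative only and come from neither llama.cpp nor DeepSeek:

import numpy as np

class KVCache:
    """Store past attention keys/values so each decode step only
    computes the new token's K/V instead of recomputing history."""
    def __init__(self):
        self.k, self.v = [], []

    def append(self, k_t, v_t):
        self.k.append(k_t)
        self.v.append(v_t)

    def tensors(self):
        # shape: (tokens_so_far, n_heads, head_dim)
        return np.stack(self.k), np.stack(self.v)

cache = KVCache()
for step in range(3):  # stand-in for a decode loop
    cache.append(np.random.rand(4, 8), np.random.rand(4, 8))  # (heads, head_dim)
k, v = cache.tensors()
print(k.shape)  # (3, 4, 8): one cached entry per generated token

The SIMD quantization work the fact-check describes touches none of this structure, which is exactly the conflation being called out.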
[0] https://www.linkedin.com/posts/searchguy_this-is-my-aha-moment-in-ai-an-ai-model-activity-7290244226766823425-OUQk?utm_source=share&utm_medium=member_desktop
Posted at 2025-01-29 18:17 by mirawelner | 0 comments
I have:
Ludicity (https://ludic.mataroa.blog)
Steno & PL (https://blog.waleedkhan.name)
Mijndert Stuij (https://mijndertstuij.nl)
Ben Congdon's blog (https://benjamincongdon.me/blog/)
ruder.io (https://ruder.io)
Stratechery (https://stratechery.com)
These are basically all ML/AI blogs except Ludicity. Any others I should add?
Hey there, my name's Tom Ventura, and I'm building a product that I want to integrate with EPIC's referral process for psychologists/psychiatrists. However, I need to know more about how that process works in EPIC from a user perspective. I am looking for somebody who can explain to me how the referral process works in EPIC, and potentially even show me an empty screen of what you have to fill out, if possible. Thank you!
Posted at 2025-01-29 17:28 by kittycatmoew | 0 comments
I know about malware, but not a lot, and I want to add more to my document. I wrote a document about types of malware that are computer-specific, but now I'm writing another one; what topics should I add? It can be any topic as long as it's related to malware and/or software that helps with removal of and protection against it. Complex topics will take longer to research, though, since I'm still in school! :)
A little NumPy in J
all =: *./
any =: +./
sort =: /:~
shape =: $
ndim =: #@:$
size =: */@:$
linspace =: 3 : 0
'a b n' =. y
h =. (b-a)%(n-1)
a + h * i. n
)
range =: 3 : 0
l =. #y
if. l=1 do. res =. i. y
elseif. l=2 do. res =. ({. y) + i. -~/y
else. 'a b h' =. y
res =. a + h * i. >. (b-a) % h
end.
res
)
sum =: +/
prod =: */
cumsum =: +/\
cumprod =: */\
diff =: 3 : '2 -~/\ y'
lcm_reduce =: *./
gcd_reduce =: +./
lcm =: *.
gcd =: +.
max =: >./
min =: <./
argmaxi =: i. >./
argmini =: i. <./
argmax =: 4 : 0
if. x=_ do. argmaxi ,y
elseif. x=1 do. argmaxi"1 y
else. argmaxi"1 |: y
end.
)
argmin =: 4 : 0
if. x=_ do. argmini ,y
elseif. x=1 do. argmini"1 y
else. argmini"1 |: y
end.
)
cos =: 2&o. NB. was 0&o., which is sqrt(1-y^2), not cosine
sin =: 1&o.
tan =: 3&o.
arcsin =: _1&o. NB. inverses are the negative circle functions (5&o. and 6&o. are sinh, cosh)
arccos =: _2&o.
arctan =: _3&o.
asin =: arcsin
acos =: arccos
atan =: arctan
hypot =: %:@: +/@: *:
deg2rad =: (%&180) @: (1p1&*)
rad2deg =: (*&180) @: (%&1p1)
radians =: deg2rad
degrees =: rad2deg
asinh =: _5&o. NB. inverse hyperbolics are _5 _6 _7&o. (7&o. is tanh)
acosh =: _6&o.
atanh =: _7&o.
round =: 4 : '(<.0.5 + y*10^x)%10^x'
around =: round
rint =: <.@:(+&0.5)
floor =: <.
ceil =: >.
exp =: 3 : '^y'
expm1 =: 3 : '(^y) - 1'
exp2 =: 3 : '2^y'
log =: 3 : '^. y'
log10 =: 3 : '(^. y) % (^. 10)'
log2 =: 3 : '(^. y) % (^. 2)'
log1p =: 3 : '^. 1+y'
add =: +
reciprocal =: %
power =: ^
subtract =: -
true_divide =: %
floor_divide =: <.@:%
float_power =: ^
fmod =: |~
mod =: |~
divmod =: <.@:% , |~ NB. floored quotient paired with remainder, as in numpy
remainder =: |~
bitwise_and =: 17 b. NB. *. and +. are LCM/GCD, bitwise only on 0-1 arguments
bitwise_or =: 23 b.
binary_rep =: ": @: #: NB. #: yields base-2 digits; 2&#. goes the other way
unique =: ~.
unique_counts =: ~. ,: #/.~
unique_inverse =: ~. ; (~. i. ]) NB. boxed: values and inverse indices differ in length, so ,: would pad
unique_values =: ~.
unique_all =: ~. ; (I. @: ~:); (~. i. ]) ; #/.~
intersect1d =: [ -. ([ -. ])
setdiff1d =: -.
setxor1d =: ~. @:(-. , (-.~))
union1d =: ~. @:,
shuffle =: 3 : '(?~ #y) { y'
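For side-by-side reading, here are a few of the numpy calls being mirrored (the J expressions in the comments refer to the definitions above):

import numpy as np

print(np.linspace(0, 1, 5))                          # J: linspace 0 1 5
print(np.diff([1, 4, 9, 16]))                        # J: diff 1 4 9 16
print(np.argmax([3, 1, 4, 1, 5]))                    # J: _ argmax 3 1 4 1 5
print(np.hypot(3, 4))                                # J: hypot 3 4
print(np.unique([1, 2, 2, 3], return_counts=True))   # J: unique_counts 1 2 2 3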
Posted at 2025-01-29 16:21 by muragekibicho | 0 comments
Slide 1 (Title and one-liner)
LeetArxiv: Leetcode for implementing Arxiv papers

Slide 2 (Problem)
- Programmers are losing their coding jobs to AI.
- Becoming an AI researcher is the best path to career longevity.
- Understanding math papers is a barrier to becoming a researcher.

Slide 3 (Solution)
We teach programmers how to turn intimidating math research into code by offering annotated, step-by-step code implementations.

Slide 4 (Market Potential)
- 225,000 software engineers were laid off in 2024.
- In the same period, AI job postings increased by 2.0%.

Slide 5 (Traction, 2 months after launch)
- 586 subscribers on our Substack ($1.22k ARR)
- 291 customers on Udemy ($505 ARR)

Slide 6 (Business Model)
1. Currently, we offer Substack-based coding guides.
2. We also offer video-based courses on Udemy.
3. We are building a website to host Leetcode-style programming content.

Slide 7 (Team)
Solo programmer, Statistics degree

Slide 8 (Ask)
Looking to raise a $20,000 pre-seed for 6 months of runway. Funds go towards:
- Hiring a dedicated web developer to maintain the LeetArxiv website.
- Covering Docker, MongoDB, and Heroku server hosting costs.

Here are the slides: https://x.com/murage_kibicho/status/1884626898417864969