It is not implemented in the SparkRunner at the time I am making this comment.
You can follow https://github.com/apache/beam/issues/22524 which is a feature request ticket to implement it.
I found that ticket by browsing the code to understand the status - it is linked from https://github.com/apache/beam/blob/5e29867ba53d940e6d5a2e0fdc25a883ab1547de/runners/spark/src/main/java/org/apache/beam/runners/spark/translation/TransformTranslator.java#L409. (contributions welcome!)
Here's an easy workaround: even if you see "App URL has already been claimed," just click next. You can add your link later and create your Placement ID to use it for now.
You should place your image file and html file in the same folder.
Just to clarify: are you trying to calculate the number of consecutive days an issue has occurred for a specific well using Spotfire expressions?
Pravallika, your answer is very useful; however, there is a field inside the Identity Provider I cannot set: "Client application requirement". I tried all possible combinations inside "identityProviders" in the template. I know you could add similar information inside "appSettings" too, but still no luck. Do you know what the solution could be?
I did find a way around with this code (from https://alexknyshov.github.io/R/page3.html):
tree2$edge.length[which(!(tree2$edge[,1] %in% tree2$edge[,2]))] <- sum(tree2$edge.length[which(!(tree2$edge[,1] %in% tree2$edge[,2]))])/2
And a deeper explanation on why the root appears this way when using the ape::root function can be found here:
It's only one a day; it's a test though, and I'm not sure what you get for helping. Let me know more info please.
I have developed this library in Python that corresponds to your needs.
Things have improved. Just create an action environment in vRO now and add the module you want to use (it supports Python, Node.js, and PowerShell) and you are good to go (https://cloudblogger.co.in/2022/07/20/getting-started-vrealize-orchestrator-script-environments-cb10098/). See the image on how to add a Node.js module quickly to vRO. If your vRO is in a restricted environment with limited connectivity, you can also use zip bundles (https://cloudblogger.co.in/2023/02/18/run-a-simple-custom-python-script-in-vro-cb10106/).
I got a similar error on my Mac when I tried to install faiss-cpu 1.11.0. It then worked when I installed version 1.10.0 instead (pip install faiss-cpu==1.10.0).
As of .NET 9, the preferred way to get the current app's full path is:
Environment.ProcessPath
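A minimal usage sketch (note the nullable return; some hosts have no file-backed process path):

// Returns the full path of the executable for the current process,
// or null when the runtime can't determine one.
string? exePath = Environment.ProcessPath;
Console.WriteLine(exePath ?? "No process path available");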
Implement a CAPL script that blinks an LED every 1 second when a 'Blink_Enable' signal is set to 1, and stops blinking when set to 0.
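A minimal CAPL sketch of one way to do this; the system variable @sysvar::IO::LED used to drive the LED is hypothetical, so wire it to however your panel or hardware exposes the LED:

variables
{
  msTimer blinkTimer;   // fires every 1000 ms while blinking is enabled
  int ledState = 0;
}

on signal Blink_Enable
{
  if (this == 1)
  {
    setTimer(blinkTimer, 1000);   // start blinking
  }
  else
  {
    cancelTimer(blinkTimer);      // stop blinking
    ledState = 0;
    @sysvar::IO::LED = 0;         // hypothetical LED system variable
  }
}

on timer blinkTimer
{
  ledState = !ledState;           // toggle the LED state
  @sysvar::IO::LED = ledState;
  setTimer(blinkTimer, 1000);     // re-arm for the next second
}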
The TOP clause does not have a hardcoded maximum limit on the number of rows you can specify.
The actual limit is constrained by:
Available system memory
Query performance
The total number of rows in the result set or source table
If you specify a value larger than the number of available rows, SQL Server will simply return all available rows.
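For example, this query is valid even if the table holds far fewer rows (dbo.Orders is a hypothetical table):

-- If the table has fewer than 100,000,000 rows, all rows are returned.
SELECT TOP (100000000) *
FROM dbo.Orders
ORDER BY OrderDate DESC;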
I just solved this problem in a script where I also connected to Azure AD and PnP Online, just by connecting to Microsoft Graph first.
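For reference, the connection order that worked for me looked roughly like this (the scope, tenant, and site URL are placeholders for your own values):

# Connecting to Microsoft Graph first avoided the error
Connect-MgGraph -Scopes "User.Read.All"
Connect-AzureAD -TenantId $tenantId
Connect-PnPOnline -Url $siteUrl -Interactive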
Try removing the With block; replace this:
With OvenArray(OvenNum).Cells(3, 4)
    .NumberFormat = "@"
    .Value = FixArray(count - 1, 2)
End With
with direct references:

OvenArray(OvenNum).Cells(3, 4).NumberFormat = "@"
OvenArray(OvenNum).Cells(3, 4).Value = FixArray(count - 1, 2)
For me, "Reset Package Caches" was not clickable.
What worked instead was:
Go to your app target in Xcode.
In the General tab, scroll down to Frameworks, Libraries, and Embedded Content.
Find the package(s) mentioned in the error.
Remove them by selecting the item and clicking the minus (-) button.
Thanks to what @mehdi-sahraei suggested, I changed the dtype to None, and this allowed the other rows (any row after the header line) to be parsed correctly. Finally, it seems that there is no bug in how the header line is treated, but rather a lack of clarity in the documentation. As indicated in my original post, the documentation says:

... if the optional argument names=True, the first commented line will be examined for names ...

But what the documentation doesn't tell you is that in that case, the detected header is stored in dtype.names and not beside the other rows that come after the header in the file. So the header line is actually there, but it is not directly accessible like the other rows in the file. Here is a working test case for those who might be interested in checking how this works in practice:
C:\tmp\data.txt
#firstName|LastName
Anthony|Quinn
Harry|POTTER
George|WASHINGTON
And the program:
import numpy as np

with open("C:/tmp/data.txt", "r", encoding="UTF-8") as fd:
    result = np.genfromtxt(
        fd,
        delimiter="|",
        comments="#",
        dtype=None,
        names=True,
        skip_header=0,
        autostrip=True,
    )

print(f"result = {result}\n\n")
print("".join([
    "After parsing the file entirely, the detected ", "header line is: ",
    f"{result.dtype.names}"
]))
Which gives the expected result:
result = [('Anthony', 'Quinn') ('Harry', 'POTTER') ('George', 'WASHINGTON')]
After parsing the file entirely, the detected header line is: ('firstName', 'LastName')
Thanks everyone for your time and your help and I hope this might clarify the issue for those who have encountered the same problem.
nil is an attribute defined in the i namespace. For this FirstName node, the attribute has the value true.
I got the same problem; you have to enable both USB debugging and Wireless debugging. Turning on only USB debugging won't work.
import React, { useState } from "react";
import { Chess } from "chess.js";
import { Chessboard } from "react-chessboard";

export default function ChessGame() {
  const [game, setGame] = useState(new Chess());
  const [fen, setFen] = useState(game.fen());

  function makeMove(move) {
    const gameCopy = new Chess(game.fen());
    const result = gameCopy.move(move);
    if (result) {
      setGame(gameCopy);
      setFen(gameCopy.fen());
      setTimeout(() => botMove(gameCopy), 500);
    }
    return result;
  }

  function botMove(currentGame) {
    const moves = currentGame.moves();
    if (moves.length === 0) return;
    const randomMove = moves[Math.floor(Math.random() * moves.length)];
    currentGame.move(randomMove);
    setGame(currentGame);
    setFen(currentGame.fen());
  }

  function onDrop(sourceSquare, targetSquare) {
    const move = {
      from: sourceSquare,
      to: targetSquare,
      promotion: "q", // always promote to a queen
    };
    const result = makeMove(move);
    return result !== null;
  }

  // render the board from the tracked FEN
  return <Chessboard position={fen} onPieceDrop={onDrop} />;
}
time() accepts a negative bars_back argument to retrieve the UNIX time up to the 500th bar in the future. You could iterate until the expected number of bars is found:
int counter = 1
while time("", -counter) < futureTime
    counter += 1
log.info("The number of bars is: {0}", counter)
I had a similar issue to what @ktsangop described, but in my case there was a click event listener along with the routerLink directive, and the navigation from routerLink was interrupted by the one in the event listener, leading to unexpected behaviour.
HTML code:
<a
    mat-tab-link
    [routerLink]="tab.link"
    (click)="selectTab($event, tab)"
>
    {{ tab.label | translate }}
</a>
TS code:
public selectTab(event: Event, tab: Tab): void {
    event.preventDefault();
    event.stopPropagation();
    this.router.navigateByUrl(tab.link);
}
If you don't use the environment file, you can also inject it like this:
app.module.ts
imports: [
    NgxStripeModule.forRoot()
]
app.component.ts
constructor(private yourConfigService: ConfigLoaderService) {
    injectStripe(this.yourConfigService.stripe?.publishableKey);
}
I found an answer here:
Hide properties and events in new component
Not sure if I have to close the question as a duplicate.
Does this help you in any way?
import numpy as np
rows = 3
cols = 4
empty_2d_array = np.empty((rows, cols))

Note that np.empty leaves the values uninitialized (whatever happens to be in memory); use np.zeros((rows, cols)) if you need the array filled with zeros.
If you are using Notepad++, you can easily convert the text file to UTF-8 using the option at the bottom right. You can convert a text document to any of these formats from Notepad++.
In Postgres the keyword user is reserved, so add the @Table annotation and give the user table another name, like adding _user:

@Table(name = "_user")
class User {}

I also invite you to read this; it may help you prevent SQL injection.
Thank you to @mndbuhl for pointing me at the solution in another post. I went with setting MapInboundClaims to false to give back all of the original claim names.
https://stackoverflow.com/a/79012024/4194514
builder.Services
    .AddAuthentication()
    .AddOpenIdConnect(options =>
    {
        // your configuration
        options.MapInboundClaims = false;
    });
There is a good example of this in this tutorial.
To simplify the tutorial, we could do the following.
In the body you could have something like this:
:root {
    --main-color: #42b983;
}

body {
    background-color: var(--main-color);
}
To access and change the variable in javascript, you could do the following thing:
// Get the root element
const root = document.documentElement;
// Retrieve the current value of a CSS variable
const currentColor = getComputedStyle(root).getPropertyValue('--main-color');
// Update the CSS variable
root.style.setProperty('--main-color', '#ff5733');
I am not tackling all the JavaScript interactions here, as that has already been done in the answers before; I am just showing the code that updates the root color.
<td>{{productos.imagenPro |nombre="img" }}</td>
How can I make it so that, once I have the data, I can assign it a name instead of what is rendered from the database? Since it is an internet image, the whole link gets rendered, and I don't want that; instead, I want the word "img" to appear by default.
This isn't the only thing that can be said about it with Big O notation. Its best case would be O(n), and its average is O(n*n!).
Maybe this passage can help you:
Optimizing Fine-Grained Parallelism Through Dynamic Load Balancing on Multi-Socket Many-Core Systems
Don't write the $ sign; just write npm install and npm init -y.
I added some async/await and it is working now, thanks for the help.
Start Date=IF(C2>=0.25,MINIFS(Table1[[#All],[Column1]],Table1[[#All],[April]],'Job Hours'!A2),"")
Completion Date=IF(C2>=77,MAXIFS(Table1[[#All],[Column1]],Table1[[#All],[April]],'Job Hours'!A2),"")
For me, it was because I included files with no extensions. After adding the correct extension to the file, the build succeeded.
Private Sub Worksheet_Change(ByVal Target As Range)
    Dim cell As Range
    Dim invalidEntry As Boolean
    Dim cancelChange As Boolean
    Dim sysDateColumn As Integer

    ' Set your SysDate column number here (e.g., 4 for Column D)
    sysDateColumn = 4

    ' Prevent multiple alerts
    Application.EnableEvents = False
    Application.ScreenUpdating = False

    ' Check each changed cell
    For Each cell In Target
        ' Only validate SysDate column
        If cell.Column = sysDateColumn Then
            If Not IsEmpty(cell) Then
                ' Reset validation flags
                invalidEntry = False

                ' Check 1: Is it a date at all?
                If Not IsDate(cell.Value) Then
                    invalidEntry = True
                Else
                    ' Check 2: Correct format (dd-mmm-yyyy)
                    If Not cell.Text Like "##-???-####" Then
                        invalidEntry = True
                    ' Check 3: Not a past date
                    ElseIf CDate(cell.Value) < Date Then
                        invalidEntry = True
                    End If
                End If

                ' If invalid, mark for undo
                If invalidEntry Then
                    cancelChange = True
                    cell.Value = "" ' Clear invalid entry
                End If
            End If
        End If
    Next cell

    ' Restore Excel functionality
    Application.ScreenUpdating = True
    Application.EnableEvents = True

    ' Show error if needed
    If cancelChange Then
        MsgBox "Invalid date! Please:" & vbCrLf & _
               "1. Use dd-mmm-yyyy format (e.g., 05-Jun-2024)" & vbCrLf & _
               "2. Only enter today or future dates", _
               vbCritical, "Invalid Date Entry"
    End If
End Sub
Finally, I resolved the issue by using the NTS version.
OMG it's absolutely correct. Changed mine to "users" and it automatically worked! Thanks!
AWS CodeDeploy fails in the DownloadBundle or Install event because of a long-path issue on Windows Server.
Run this command in PowerShell:
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force
Then restart the CodeDeploy agent on your server:
powershell.exe -Command Restart-Service -Name codedeployagent
Now retry the deployment.
This is the official AWS documentation for this:
https://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting-deployments.html#troubleshooting-long-file-paths
| header 1 | header 2 |
| --- | --- |
| cell 1 | cell 2 |
| cell 3 | cell 4 |
Remember that all script tags must have the type attribute set to module: if not, the browser will not pull the associated object from the importmap.
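For illustration, a minimal sketch (the import map entry and CDN URL are just examples):

<script type="importmap">
{
  "imports": {
    "lodash": "https://cdn.jsdelivr.net/npm/lodash-es/lodash.js"
  }
}
</script>

<!-- Without type="module", the bare specifier below would not resolve -->
<script type="module">
  import _ from "lodash";
  console.log(_.chunk([1, 2, 3, 4], 2));
</script>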
This sounds like you are looking for integration testing in Flutter. It allows you to run the app on a Simulator or Emulator with plugins and network calls.
You can also consider Patrol. It offers tools to test Native functionality that standard integration tests don't offer.
Generally, it is recommended to mix Integration Tests with Unit and Widget tests to get a well-rounded test suite. See Flutter Testing for more details on this.
/tmp/ipykernel_12/3189841976.py:42: DeprecationWarning: getsize is deprecated and will be removed in Pillow 10 (2023-07-01). Use getbbox or getlength instead.
current_h += font.getsize(line)[1] + line_spacing
/tmp/ipykernel_12/3189841976.py:51: DeprecationWarning: getsize is deprecated and will be removed in Pillow 10 (2023-07-01). Use getbbox or getlength instead.
current_h += font.getsize(line)[1] + line_spacing
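For reference, a minimal sketch of the migration the warning suggests, deriving the line height from getbbox (the font file is a placeholder):

from PIL import ImageFont

font = ImageFont.truetype("arial.ttf", 24)  # placeholder font file
line = "example text"
line_spacing = 4
current_h = 0

# Old (deprecated): current_h += font.getsize(line)[1] + line_spacing
left, top, right, bottom = font.getbbox(line)  # bounding box of the rendered text
current_h += (bottom - top) + line_spacing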
I tried leaving it on 'default', but I didn't see any difference compared with 'single'.
This problem is solved. Since there are multiple copies of Guava in the system, all I needed to do was use jarjar to rewrite the package names. For example:
rule com.google.common.** com.google.shaded.common.@1
rule io.grpc.** io.grpc.shaded.@1
rule com.google.protobuf.** com.google.shaded.protobuf.@1
rule kotlin.** kotlin.shaded.@1
rule io.perfmark.** io.shaded.perfmark.@1
rule okio.** okio.k.@1
You can do:
npm view @ngrx/store versions
It will list the available versions; use any of them to manually change the version in your package.json, then do an npm install, and you should be good.
I'm an idiot, I didn't read the description for terminal.integrated.sendKeybindingsToShell properly... it overrides terminal.integrated.commandsToSkipShell (for some reason). You need to disable terminal.integrated.sendKeybindingsToShell and use terminal.integrated.allowChords = false to get you most of the way to 'normal' terminal key bindings. You then add stuff like:
"terminal.integrated.commandsToSkipShell": [
"-cursorai.action.generateInTerminal",
"-workbench.action.quickOpen"
],
"terminal.integrated.allowChords": false
to your settings.
You can get the dimensions of a ProBuilder cube, but I believe you can't change them from a script.
You can get them by getting the MeshCollider component of the ProBuilder object and then reading its bounds from the script.
The result, called extents, is exactly half the values shown in the ProBuilder inspector component.
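A minimal Unity sketch of reading those bounds, assuming the ProBuilder object has a MeshCollider attached:

using UnityEngine;

public class ReadProBuilderSize : MonoBehaviour
{
    void Start()
    {
        // bounds.extents is half the size shown in the ProBuilder inspector
        MeshCollider meshCollider = GetComponent<MeshCollider>();
        Vector3 extents = meshCollider.bounds.extents;
        Vector3 fullSize = extents * 2f; // matches the inspector values
        Debug.Log($"Extents: {extents}, full size: {fullSize}");
    }
}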
This problem has nothing to do with VBA. Of course, it can be handled in VBA. You have a few ways to solve this problem:
Convert the column you are going to paste into to text using the cell format.
Or paste as plain text so that the formats or formulas are not transferred.
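For the second option, a minimal VBA sketch (sheet and range names are placeholders):

' Copy the source range, then paste only the values so formats and formulas
' are not transferred
Worksheets("Source").Range("A1:A10").Copy
Worksheets("Target").Range("B1").PasteSpecial Paste:=xlPasteValues
Application.CutCopyMode = False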
(https://codeacademycollege.com/courses/python-fuer-einsteiger/) The website looks nice, but there are very few pictures of real people, which makes it seem a bit untrustworthy. If I start this course, I’ll have to invest a lot of time, and I don’t want to waste my education voucher. Has anyone perhaps already taken this course? Are there any experiences or opinions about it?
I use Spring Authorization Server 6+ and it doesn't have an API to authenticate the PASSWORD grant type. We need to manage it with a custom AuthenticationManager and AuthenticationProvider.
The second problem is how to start the authentication flow, because we can't create a login endpoint. This is an open question.
AI is definitely changing the way we manage projects—but like any tool, the benefits depend on how you use it and which platforms you choose. From what you described, it sounds like you're expecting more strategic help from AI (like forecasting, smart delegation, or performance insights), not just basic automations like reminders.
Here’s how AI can bring real value to project management when implemented effectively:
Modern AI tools don’t just assign tasks—they analyze workload, deadlines, and team performance to suggest the best person for each task. Tools like ClickUp, Motion, or Forecast use machine learning to balance workload intelligently.
How it helps:
Reduces micromanagement
Prevents overloading key team members
Makes sure high-priority tasks are tackled first
AI can analyze past project data, team behavior, and real-time progress to predict delays or resource issues. Platforms like Wrike and Zoho Projects use this to flag potential risks before they become real problems.
What you gain:
Early warnings about delays
Insights on what’s slowing progress
Smarter contingency planning
Instead of manually checking on KPIs, AI dashboards update in real time to show project health, bottlenecks, and team efficiency. Some tools even offer automated executive summaries or weekly reports.
Benefit:
No need to manually build reports
Instant visibility into project success metrics
Objective feedback on team performance
You’ve probably used basic automations—but AI takes it further. For example, if a task is delayed, AI can reorganize the project timeline, notify stakeholders, and reassign work automatically.
Why it’s useful:
Saves time adjusting timelines
Keeps everyone aligned
Reduces manual coordination
Since you mentioned content marketing projects—platforms like WriteGenic.ai (or Jasper, Copy.ai) can help with content drafts, emails, project updates, and documentation, saving time and improving consistency.
The best results come when:
You integrate AI tools into your daily workflow (not just use them as add-ons)
You train your team to use AI features effectively
You pick tools that offer real AI, not just automation macros
Yes, AI in project management works—and for many teams, it leads to better forecasting, faster execution, and less burnout. But the right match between tools, team needs, and goals is critical. You may want to explore more specialized platforms or go deeper into the features of the ones you’re already using.
Check the option Show variable values inline in editor while debugging, and enable it if it is unchecked. This should do it!
If you're using the prefixIcon, it's important to set the suffixIcon as well (even if it's just an empty SizedBox) to ensure the hint text and label are centered correctly.
Here's a modified version of your AppSearchTextField widget with a fix that keeps the label and hint text properly aligned when a prefixIcon is provided:
class AppSearchTextField extends StatelessWidget {
  const AppSearchTextField({
    super.key,
    required this.controller,
    this.onChanged,
    this.hintText = 'Search',
    this.suffixIcons = const [],
    this.prefixIcon,
  });

  final TextEditingController controller;
  final Function(String)? onChanged;
  final String hintText;

  /// List of suffix icons (can be GestureDetectors or IconButtons)
  final List<Widget> suffixIcons;

  /// Optional prefix icon (e.g., search icon)
  final Widget? prefixIcon;

  @override
  Widget build(BuildContext context) {
    return Container(
      height: 36.h,
      padding: EdgeInsets.symmetric(horizontal: 12.w),
      decoration: BoxDecoration(
        color: Theme.of(context).colorScheme.inversePrimary,
        borderRadius: BorderRadius.circular(12.w),
      ),
      child: Row(
        children: [
          Expanded(
            child: TextField(
              controller: controller,
              onChanged: onChanged,
              style: const TextStyle(color: Colors.white),
              decoration: InputDecoration(
                hintText: hintText,
                hintStyle: AppTextStyles.size15.copyWith(
                  color: Theme.of(context).colorScheme.onPrimary,
                ),
                border: InputBorder.none,
                prefixIconColor: Theme.of(context).colorScheme.onPrimary,
                prefixIconConstraints: BoxConstraints(
                  minHeight: 24.h,
                  minWidth: 24.w,
                ),
                prefixIcon: Padding(
                  padding: const EdgeInsets.only(right: 6.0),
                  child: Column(
                    mainAxisSize: MainAxisSize.min,
                    mainAxisAlignment: MainAxisAlignment.center,
                    children: [
                      prefixIcon ?? const Icon(Icons.search),
                    ],
                  ),
                ),
                suffixIcon: suffixIcons.isNotEmpty
                    ? Row(
                        mainAxisSize: MainAxisSize.min,
                        children: suffixIcons
                            .map((icon) => Padding(
                                  padding: const EdgeInsets.only(right: 4.0),
                                  child: icon,
                                ))
                            .toList(),
                      )
                    : const SizedBox(),
              ),
            ),
          ),
        ],
      ),
    );
  }
}
Please do try it, and vote for this answer if it helps.
Thanks!
It seems you are using an ST-LINK V2 clone to flash a Blue Pill dev board in SWD mode.
Have you tried resetting the target?
As you have it now, the RST pin is not connected, so it may be necessary to push the reset button on the Blue Pill while you flash to get the device to enter SWD mode.
is there a way to do this without generating new schemas and instead applying the old SQL schemas to the new WCF-SQL port?
I agree with @Dijkgraaf, you cannot directly reuse the old SQL adapter schemas with the WCF-SQL adapter.
The classic SQL adapter and the WCF-SQL adapter are fundamentally different in how they process and expect message structures. According to the migration guidance provided by BizTalk360, the old schemas must be replaced with new schemas generated using the WCF-SQL adapter tooling.
"The start element with name '' and namespace '' was unexpected. Please ensure that your input XML conforms to the schema for the operation."
The above error message can occur because the XML message you're sending to the WCF-SQL adapter does not match the expected schema structure; specifically, it's missing the correct root element name and namespace.
When using the WCF-SQL adapter, the adapter validates the incoming message against the schema associated with the operation (e.g., a stored procedure). If the root element name or namespace in the XML doesn’t exactly match what the adapter expects, you get this error.
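For illustration, a typed stored-procedure request generally looks like the sketch below; the procedure name usp_GetOrders and its parameter are hypothetical, while the namespace follows the WCF-SQL TypedProcedures pattern:

<!-- The root element must be the procedure name, in the generated namespace -->
<usp_GetOrders xmlns="http://schemas.microsoft.com/Sql/2008/05/TypedProcedures/dbo">
  <OrderId>42</OrderId>
</usp_GetOrders>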
Reference - https://www.biztalk360.com/blog/migrate-old-biztalk-sql-adapter-to-wcf-sql-adapter/
You can change the format data for your command. Suppose you want to use the command Get-Process:
Get-Process
The output:
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
231 13 3608 12932 6388 0 AggregatorHost
417 27 29728 40044 2.75 4592 0 AnyDesk
...
Now, suppose you want to change the alignment of the Handles column. Follow the steps below:
1- Get the type name with Get-Member, like this:
PS C:\Users\username> Get-Process | Get-Member
Output:
TypeName: System.Diagnostics.Process
Name MemberType Definition
---- ---------- ----------
Handles AliasProperty Handles = Handlecount
Name AliasProperty Name = ProcessName
...
2- Get the format data for "System.Diagnostics.Process" with the Get-FormatData cmdlet, then export it with the Export-FormatData cmdlet, like this:
PS C:\Users\username> Get-FormatData -TypeName System.Diagnostics.Process | Export-FormatData -Path .\yourFolder\formatGetProcess.ps1xml
3- Then open the formatGetProcess.ps1xml file with Notepad (VS Code is better if you have it). Reformat the file so you can see the tags. Look at this picture:
You can see the Handles field's width and alignment. Change the alignment to "Left" and save the file.
4- Use the Update-FormatData cmdlet to change the Get-Process format data, like this:
PS C:\Users\username> Update-FormatData -PrependPath .\yourFolder\formatGetProcess.ps1xml
After the above command, when you run Get-Process, you can see the Handles column is now left-aligned:
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
225 6388 0 AggregatorHost
417 4592 0 AnyDesk
...
Turns out there was a logic issue elsewhere in my program causing ungrab to be called after every grab call. Resolving that has resolved my issue.
Should be resolved with version 2025.1
You are correct in your suspicion: each call of the hook manages its own state; they do not share state. When you refresh, your hook just fetches the data again. If you want higher-level state management, useContext or React Redux are options. useContext is another native hook that allows nested components to share state. Redux is more intense: it could be more than you need, but could be exactly what you're looking for. I gravitate towards useContext when possible, as it is relatively simple.
I have the same issue, but I don't have the \OemCertificates key created in the target, and no .reg file.
This JavaScript code did the trick for me
window.addEventListener('pageshow', function () {
    let ddlValue = document.getElementById("mySelect").value;
    console.log('Back nav selected value:', ddlValue);
});
The option from @locomoco works in Docker 28.1.1 and Compose 2.35.1.
Unable to View Tables After Copying Database to Another Azure SQL Server
I tested this scenario in my environment using the 'Copy' option in the Azure SQL Database. I created a new server by replacing the existing SQL Server name. Both the server deployment and the database copy operation completed successfully, and I was able to view the tables and data in the copied database.
Steps I followed:
Open the existing SQL database in the Azure portal.
Click on the Copy option.
You will be redirected to the Review + Create page.
Specify the target server name, database name, and compute + storage settings.
Verify the details entered, then click Review + Create to initiate the copy operation.
After completion, the new server and database were successfully created, and I was able to access all tables and data without issues.
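As an alternative to the portal, the same copy can be started with T-SQL from the master database of the target logical server (server and database names below are placeholders):

-- Run in the master database of the target server
CREATE DATABASE MyDbCopy
    AS COPY OF [source-server].MyDb;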
For your small project involving batch data ingestion into an HDFS data lake with formats like RDBMS, CSV, and flat files, here are recommendations based on the details you shared:
Talend:
Talend is an excellent choice for batch data ingestion. It supports various data formats, including RDBMS and flat files, and offers a low-code interface for creating pipelines. Its integration with HDFS makes it highly suitable for your use case.
Hevo:
Hevo simplifies data ingestion with its no-code platform. It supports batch ingestion and has over 150 pre-configured connectors for diverse data sources, including RDBMS and CSV files. Hevo’s drag-and-drop interface makes it beginner-friendly.
Apache Kafka:
Although Kafka is better known for real-time streaming, it can also be configured for batch ingestion. Its scalability and robust support for HDFS make it a reliable option for your project.
Estuary Flow:
Estuary Flow offers real-time and batch processing capabilities. With minimal coding required, it’s an excellent choice for ingesting CSV and flat files into HDFS efficiently.
For your specific project, Talend and Hevo stand out for their simplicity and direct integration with HDFS. Choose the one that aligns best with your familiarity and project requirements.
It is now possible to get coverage for .erb templates using SimpleCov. You just need to make a call to enable_coverage_for_eval in the start block, like this:

require 'simplecov'

SimpleCov.start do
  enable_coverage_for_eval
  ...
  add_group "Views", "app/views"
  ...
end
See also https://stackoverflow.com/a/4758351/29165416:

Add the following to bin/activate:

export OLD_PYTHONPATH="$PYTHONPATH"
export PYTHONPATH="/the/path/you/want"

Add the following to bin/postdeactivate:

export PYTHONPATH="$OLD_PYTHONPATH"
Go to Pods, select Build Settings, and under Architectures, add arm64 to Excluded Architectures.
Is there a way to expose a single page (aspx) for anonymous access?
You can allow anonymous access to a specific page by overriding authentication settings in web.config.

First, check whether the lines below are set in web.config to deny anonymous users globally:

<system.web>
  <authorization>
    <deny users="?" />
  </authorization>
</system.web>

Remove the above lines and use <location> tags in web.config to define authorization settings: restrict access to protected pages and allow anonymous access to unprotected pages, as shown below:
<location path="Default.aspx">
  <system.web>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</location>

<location path="About.aspx">
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>
Make sure the com.google.protobuf:protobuf-java dependency is the same version as the protoc compiler. This is likely the problem here. You can find the duplicate issue in gRPC Java: https://github.com/grpc/grpc-java/issues/11925
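For example, with the protobuf Gradle plugin, you can pin both to the same version (3.25.3 below is only a placeholder; use one consistent version):

dependencies {
    // runtime library; keep in sync with protoc below
    implementation("com.google.protobuf:protobuf-java:3.25.3")
}

protobuf {
    protoc {
        // compiler; must match protobuf-java above
        artifact = "com.google.protobuf:protoc:3.25.3"
    }
}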
On Ubuntu/Linux, do this:
Install the full Qt4 development packages:
sudo apt-get update
sudo apt-get install qt4-dev-tools libqt4-dev libqtwebkit-dev
For those who tried it all but it didn't help, try adding a file at src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker with one line:

mock-maker-subclass

It solved the problem for me, but it doesn't let you mock final methods and classes.
I had this same issue in Visual Studio 22 (after updating from 17.8 to 17.12) and simply restarting it fixed the problem.
I want to fetch real-time prices from MetaTrader 5 (MT5), but I haven’t found a reliable solution yet. Currently, I’m using the mt5.symbol_info_tick(symbol) function inside a while True loop. Could you please share a better or more efficient approach in Python?
Old question, but it is hard to find anything on the FactSet add-in online. I found the FactSet Office API Type Library, which provides the FDSOfficeAPI_Server class. I'm now trying to find a proper ProgID for it, so I can late-bind it.
You can try to use CSS for this. One way is through CSS animations; here is a video on it: https://www.youtube.com/watch?v=z2LQYsZhsFw. You can directly change the border property of the icons this way without affecting the width being stretched.
The animations can be activated through the CSS property animation-play-state, which can also be changed through JS.
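A minimal sketch of toggling it from JS (the selector is a placeholder):

const icon = document.querySelector('.icon');

// Pause or resume the CSS animation without touching the element's width
icon.style.animationPlayState = 'paused';   // freeze the border animation
icon.style.animationPlayState = 'running';  // resume it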
It's possible your Git authentication is invalid.
Try Git Credential Manager or git auth login to fix the error.
i. Identify key challenges faced by Bhutan Ferro Alloys Limited (BFAL)
1. Market Access
According to Kenton (2019), market access refers to the ability of a company or country to sell goods and services across borders. The ability to sell in a market is often accompanied by tariffs, regulations, and barriers such as import duties, quotas, and standards that affect how easily a product can enter and compete in that market.
Market Access Challenges of BFAL
a. Tariff
Some countries impose import duties or taxes on ferro alloys, which makes BFAL's products more expensive in foreign markets. As BFAL exports most of its products to India, the current tariff under HS Code 7202 (import duty on ferro-alloys: ferro-manganese) includes a 5% basic duty, 18% IGST, and a 10% social welfare surcharge (Customs Duty | Import Export Custom Duty India | Customs Tariff, n.d.). But India adjusts its import duties from time to time depending on its domestic supply and demand. Thus, fluctuations in tariffs make Bhutan Ferro Alloys Ltd. (BFAL)'s exports vulnerable to sudden cost increases and reduced competitiveness in the Indian market. When duties rise, it directly increases the landed cost of Bhutanese ferro alloys for Indian buyers, potentially shifting demand to cheaper alternatives from other countries or domestic sources. These unpredictable policy changes create uncertainty for BFAL in planning long-term contracts and pricing strategies. Since India is the primary export destination for BFAL, this dependency amplifies the impact of any tariff revision, making it a significant regulatory challenge for sustaining trade volumes and profitability.
b. Global Market Volatility
According to a news report in The Bhutanese by Chuki (2025), "The global market is disturbed and uncertain, particularly due to geopolitical tensions and wars affecting the European market." This instability disrupts demand patterns, delays shipments, and increases the risk of market closures or trade restrictions. Volatility in the global market creates uncertainty in pricing, order volumes, and payment timelines. Markets that could have offered diversification, like Europe, become unreliable or inaccessible due to fluctuating demand or political risk. As a result, BFAL is forced to depend more heavily on a single market like India, which increases its exposure to domestic policy changes and economic conditions in that country. Global instability also affects currency exchange rates, shipping costs, and buyer confidence, all of which reduce BFAL's competitiveness and profitability in the global arena.
c. Bilateral trade agreements:
Currently, the country has only two bilateral trade agreements with India and Bangladesh and is a party to one regional agreement, SAFTA. Beyond the South Asian region, Bhutan does not have any bilateral or multilateral trade agreement with any region or country (Ministry of Economic Affairs & UNDP, n.d.). This limited trade framework restricts Bhutan Ferro Alloys Ltd. (BFAL) from accessing wider international markets under preferential terms. Without trade agreements beyond South Asia, BFAL’s exports face higher tariffs and regulatory barriers, reducing their competitiveness. This lack of diversification increases dependence on India and exposes BFAL to greater risk from policy shifts or market slowdowns.
The code is now available at Darian Miller's Git Repository Delphi-Vault
https://github.com/carmas123/delphi-vault/blob/master/Source/DelphiVault.Windows.ServiceManager.pas
@Aplet123's answer works, but it filters out all falsy values. So, I'd suggest a small change:
function getFilteredObject(obj) {
    // we're filtering out all null, undefined and empty strings but not 0 and false
    return Object.fromEntries(Object.entries(obj).filter(([k, v]) => ![null, undefined, ""].includes(v)));
}

// you could also use `skipNull: true` as mentioned by others to only skip null values
const url = `/user?${qs.stringify(getFilteredObject({ name, age }))}`;
HttpClient(CIO) {
    install(WebSockets) {
        contentConverter = KotlinxWebsocketSerializationConverter(json)
        pingIntervalMillis = 15000
    }
    install(HttpRequestRetry) {
        retryOnServerErrors(maxRetries = 5)
        delayMillis {
            // handle it here; the lambda must return the delay in milliseconds
            5000
        }
    }
}.webSocket(<url>) { /* TODO */ }
The problem was with this string; I changed the q to 0 and now it works!
'Accept-Language': 'de-DE, de;q=0.5'
I would use the Google Sheets and store it there.
https://www.youtube.com/watch?v=70F3RlazGMY
You should be able to use the -p flag of docker to map the port exposed within the container (11434) to something on your localhost network. Something like:
docker run -p 8080:11434 -it b0300949-bf2f-404e-9ea1-ebaa34423b50
An alternative that might work is jupyter.runcurrentcell. For example:

{
    "key": "cmd+enter",
    "command": "jupyter.runcurrentcell",
    "when": "editorTextFocus && isWorkspaceTrusted && jupyter.hascodecells && !editorHasSelection && !isCompositeNotebook && !notebookEditorFocused"
}
@aaron's approach did not work for me.
From "https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html"
"native - This selects the CPU to generate code for at compilation time by determining the processor type of the compiling machine. Using -march=native enables all instruction subsets supported by the local machine (hence the result might not run on different machines). Using -mtune=native produces code optimized for the local machine under the constraints of the selected instruction set."
So, it was totally my miss.
First of all, I was checking the prod function logs while the deploys were made to staging (well, probably not totally my miss - Netlify, it is really hard to find branch deploy logs).
Anyway, the problem is that the component I was trying to SSR has a share button that uses navigator, an API that exists only in browsers. Once I made the code more "server-proof", the problem was solved.
Since Symfony 7+, it's possible to set the locale directly on the Symfony\Bridge\Twig\Mime\TemplatedEmail.

Example from the documentation:

$email = (new TemplatedEmail())
    ->from('[email protected]')
    ->to(new Address('[email protected]'))
    ->subject('Thanks for signing up!')

    // path of the Twig template to render
    ->htmlTemplate('emails/signup.html.twig')

    // change locale used in the template, e.g. to match user's locale
    ->locale('de')
;
I noticed that with pandas==2.2.1 (and possibly other versions), the bad line error is not triggered unless engine='python' is explicitly set when reading the file.
Following the example provided by @sitting_duck, here's a minimal reproducible code:
Following the example provided by @sitting_duck, here's a minimal reproducible code:
import io
import pandas as pd
sim_csv = io.StringIO(
'''A,B,C
11,21,31
12,22,32
13,23,33,43 # Bad Line
14,24,34
15,25,35'''
)
Without engine and with on_bad_lines='error':
with pd.read_csv(sim_csv, chunksize=2, on_bad_lines='error') as reader:
    for chunk in reader:
        print(chunk)
A B C
0 11 21 31
1 12 22 32
A B C
2 13 23 33
3 14 24 34
A B C
4 15 25 35
With engine='python' and with on_bad_lines='error':
sim_csv.seek(0)
with pd.read_csv(sim_csv, chunksize=2, engine='python', on_bad_lines='error') as reader:
    for chunk in reader:
        print(chunk)
A B C
0 11 21 31
1 12 22 32
[...] pandas.errors.ParserError: Expected 3 fields in line 4, saw 4
I understand exactly what you're trying to achieve: you have a deeply nested associative array (something like a YAML-style configuration), and your goal is to flatten it into structured arrays that map to database tables. These structures include categories (with parent-child relationships), settings (tied to categories), and values (holding defaults and linked to settings).
I've played around a little bit and converted this into something that is ready for database insertion, with the self-generated IDs you mentioned and references. The code below recursively processes categories and nested subcategories and differentiates between categories and settings (a containsSetting heuristic). It assigns incremental IDs to categories, settings, and values while preserving order.
I've created a github project for you, so you can test it/download it:
https://github.com/marktaborosi/stackoverflow-79606568
This is the result you get with this:
I know you're not asking for a redesign or for someone to question your approach, but I would absolutely do that if I were you. An OOP version would be cleaner; feel free to ask if you need that.
As you can see, it has tightly coupled recursion logic.
Here is the code for it (if you just want to paste it):
// This is your pre-defined settings array
$settings = [
    'basic' => [
        'installation_type' => [
            'type' => '["single","cluster"]',
            'description' => 'bla blah',
            'readonly' => false,
            'hidden' => false,
            'trigger' => null,
            'default' => 'single'
        ],
        'db_master_host' => [
            'type' => 'ip',
            'description' => 'Database hostname or IP',
            'default' => 'localhost'
        ],
        'db_master_user' => [
            'type' => 'text',
            'description' => 'Database username',
            'default' => 'test'
        ],
        'db_master_pwd' => [
            'type' => 'secret',
            'description' => 'Database user password',
        ],
        'db_master_db' => [
            'type' => 'text',
            'description' => 'Database name',
            'default' => 'test'
        ]
    ],
    'provisioning' => [
        'snom' => [
            'snom_prov_enabled' => [
                'type' => 'switch',
                'default' => false
            ],
            'snom_m3' => [
                'snom_m3_accounts' => [
                    'type' => 'number',
                    'description' => 'bla blah',
                    'default' => '0'
                ]
            ],
            'snom_dect' => [
                'snom_dect_enabled' => [
                    'type' => 'switch',
                    'description' => 'bla blah',
                    'default' => false
                ]
            ]
        ],
        'yealink' => [
            'yealink_prov_enabled' => [
                'type' => 'switch',
                'default' => false
            ]
        ]
    ]
];

$categories   = []; // array<string, array{id: int, parent: int, name: string, order: int}>
$settingsList = []; // array<string, array{id: int, catId: int, name: string, type: string|null, desc: string|null, readonly?: bool|null, hidden?: bool|null, trigger?: string|null, order: int}>
$values       = []; // array<string, array{id: int, setId: int, default: mixed}>

$catId = 1;
$setId = 1;
$valId = 1;
$order = 1;

/**
 * Recursively process nested config array into flat category, setting, value arrays.
 */
function processCategory(
    array $array,
    int $parentId,
    array &$categories,
    array &$settingsList,
    array &$values,
    int &$catId,
    int &$setId,
    int &$valId,
    int &$order
): void
{
    foreach ($array as $key => $item) {
        if (is_array($item) && isAssoc($item) && containsSetting($item)) {
            $currentCatId = $catId++;
            $categories[$key] = [
                'id' => $currentCatId,
                'parent' => $parentId,
                'name' => $key,
                'order' => $order++,
            ];
            foreach ($item as $settingKey => $settingData) {
                if (is_array($settingData) && isAssoc($settingData) && containsSetting($settingData)) {
                    processCategory([$settingKey => $settingData], $currentCatId, $categories, $settingsList, $values, $catId, $setId, $valId, $order);
                } else {
                    $currentSetId = $setId++;
                    $settingsList[$settingKey] = [
                        'id' => $currentSetId,
                        'catId' => $currentCatId,
                        'name' => $settingKey,
                        'type' => $settingData['type'] ?? null,
                        'desc' => $settingData['description'] ?? null,
                        'readonly' => $settingData['readonly'] ?? null,
                        'hidden' => $settingData['hidden'] ?? null,
                        'trigger' => $settingData['trigger'] ?? null,
                        'order' => $order++,
                    ];
                    $values[$settingKey] = [
                        'id' => $valId++,
                        'setId' => $currentSetId,
                        'default' => $settingData['default'] ?? null,
                    ];
                }
            }
        }
    }
}

/**
 * Check if the array is associative.
 */
function isAssoc(array $arr): bool
{
    return array_keys($arr) !== range(0, count($arr) - 1);
}

/**
 * Determine if an array contains at least one sub-setting (based on 'type' or 'default').
 */
function containsSetting(array $arr): bool
{
    foreach ($arr as $val) {
        if (is_array($val) && (isset($val['type']) || isset($val['default']))) {
            return true;
        }
    }
    return false;
}

// Run your flattening
processCategory($settings, 0, $categories, $settingsList, $values, $catId, $setId, $valId, $order);

// Dumping the results
echo "--- Categories ---\n";
echo "<pre>";
print_r($categories);
echo "--- Settings ---\n";
print_r($settingsList);
echo "--- Values ---\n";
print_r($values);
echo "</pre>";
Let me know if this helps!
I keep getting this default workspace error in Kali MSF. I have tried everything but the error keeps returning. I tried the default workspace too, but the error keeps returning. I have also tried reinstalling and upgrading all the modules in Kali with sudo apt update && sudo apt upgrade -y.
┌──(root㉿kalimacstudio3)-[~]
└─# service postgresql start
┌──(root㉿kalimacstudio3)-[~]
└─# service postgresql status
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; disabled; preset: disab>
Active: active (exited) since Mon 2025-05-05 11:40:47 IST; 49min ago
Invocation: b31109a027d74f0c94ff36c35ac76f77
Process: 13017 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 13017 (code=exited, status=0/SUCCESS)
Mem peak: 1.7M
CPU: 5ms
May 05 11:40:47 kalimacstudio3 systemd[1]: Starting postgresql.service - PostgreSQL RDBM>
May 05 11:40:47 kalimacstudio3 systemd[1]: Finished postgresql.service - PostgreSQL RDBM>
zsh: quit service postgresql status
┌──(root㉿kalimacstudio3)-[~]
└─# msfconsole -q
msf6 > workspace -l
pentest1
pentest2
* default
msf6 > db_status
[*] Connected to msf. Connection type: postgresql.
msf6 > db_disconnect
Successfully disconnected from the data service: local_db_service.
msf6 > db_connect user4msf:9969@localhost:5432/db4msf
[*] Connected to Postgres data service: localhost/db4msf
msf6 > db_status
[-] Error while running command db_status: Couldn't find workspace default
Call stack:
/usr/share/metasploit-framework/lib/msf/util/db_manager.rb:52:in `process_opts_workspace'
/usr/share/metasploit-framework/lib/msf/core/db_manager/event.rb:55:in `block in report_event'
/usr/share/metasploit-framework/vendor/bundle/ruby/3.3.0/gems/activerecord-7.0.8.7/lib/active_record/connection_adapters/abstract/connection_pool.rb:215:in `with_connection'
/usr/share/metasploit-framework/lib/msf/core/db_manager/event.rb:54:in `report_event'
/usr/share/metasploit-framework/lib/metasploit/framework/data_service/proxy/event_data_proxy.rb:18:in `block in report_event'
/usr/share/metasploit-framework/lib/metasploit/framework/data_service/proxy/core.rb:164:in `data_service_operation'
/usr/share/metasploit-framework/lib/metasploit/framework/data_service/proxy/event_data_proxy.rb:16:in `report_event'
/usr/share/metasploit-framework/lib/msf/core/framework.rb:328:in `report_event'
/usr/share/metasploit-framework/lib/msf/core/framework.rb:377:in `on_ui_command'
/usr/share/metasploit-framework/lib/msf/core/event_dispatcher.rb:145:in `block in method_missing'
/usr/share/metasploit-framework/lib/msf/core/event_dispatcher.rb:143:in `each'
/usr/share/metasploit-framework/lib/msf/core/event_dispatcher.rb:143:in `method_missing'
/usr/share/metasploit-framework/lib/msf/ui/console/driver.rb:408:in `block in on_startup'
/usr/share/metasploit-framework/lib/rex/ui/text/dispatcher_shell.rb:530:in `block in run_single'
/usr/share/metasploit-framework/lib/rex/ui/text/dispatcher_shell.rb:525:in `each'
/usr/share/metasploit-framework/lib/rex/ui/text/dispatcher_shell.rb:525:in `run_single'
/usr/share/metasploit-framework/lib/rex/ui/text/shell.rb:165:in `block in run'
/usr/share/metasploit-framework/lib/rex/ui/text/shell.rb:309:in `block in with_history_manager_context'
/usr/share/metasploit-framework/lib/rex/ui/text/shell/history_manager.rb:37:in `with_context'
/usr/share/metasploit-framework/lib/rex/ui/text/shell.rb:306:in `with_history_manager_context'
/usr/share/metasploit-framework/lib/rex/ui/text/shell.rb:133:in `run'
/usr/share/metasploit-framework/lib/metasploit/framework/command/console.rb:54:in `start'
/usr/share/metasploit-framework/lib/metasploit/framework/command/base.rb:82:in `start'
/usr/bin/msfconsole:23:in `<main>'
msf6 >
https://pypi.org/project/AutoCAD/
https://github.com/Jones-peter/AutoCAD
Refer to these; this AutoCAD library has the ability to make groups.
The executable could not locate the text file to read the words from. Once the file was put next to the .exe, it worked as expected.
I'm working with a ZKTeco SpeedFace-V3L device and it's able to connect to my server (I see logs for /iclock/cdata and /iclock/registry), but it's not sending any attendance data. The POST requests to /iclock/cdata arrive, but the body is empty, so I can't extract the check-in logs (e.g., ATTLOG).
Has anyone experienced this issue where the device connects but doesn't push user attendance logs?
Firmware version: ZMM510-NF24VB-Ver1.2.7
I'm using a custom Flask server to receive data on /iclock/cdata.
Any idea how to configure the device or trigger log uploads?